How to use an array in a zip function using PowerShell?

I am still pretty new to scripting and "programming" in general. If any information is missing here, let me know.
This is my working zip function:
$folder = "C:\zipthis\"
$destinationFilePath = "C:\_archive\zipped"
function create-7zip{
param([string] $folder,
[String] $destinationFilePath)
write-host $folder $destinationFilePath
[string]$pathToZipExe = "C:\Program Files (x86)\7-Zip\7zG.exe";
[Array]$arguments = "a", "-tzip", "$destinationFilePath", "$folder";
& $pathToZipExe $arguments;
}
Get-ChildItem $folder | ? { $_.PSIsContainer} | % {
write-host $_.BaseName $_.Name;
$dest= [System.String]::Concat($destPath,$_.Name,".zip");
(create-7zip $_.FullName $dest)
}
create-7zip $folder $destinationFilePath
Now I want it to zip specific folders which I have already sorted out:
get-childitem "C:\zipme\" | where-Object {$_.name -eq "www" -or $_.name -eq "sql" -or $_.name -eq "services"}
This small snippet finds the 3 folders I need, called www, sql and services. But I didn't manage to insert this into my zip function so that exactly these folders are zipped and put into C:\_archive\zipped.
Because a string is used instead of an array, it always looked for a folder called wwwsqlservice, which is not there. I tried to pass an array using @(www,sql,services) but had no success, so what's the right way, if there is one?
It should be compatible with PowerShell 2.0; no PS 3.0 cmdlets or functions, please.
Thanks in advance!

Here's a really simple example of what you want to do, removed from the context of your function. It assumes that your destination folders already exist (you can use Test-Path and New-Item to create them if they don't; see the sketch at the end of this answer), and that you're using 7z.exe.
$directories = @("www","sql","services")
$archiveType = "-tzip"

foreach ($dir in $directories)
{
    # Use $dir to update the destination each loop to prevent overwrites!
    $sourceFilePath = "mySourcePath\$dir"
    $destinationFilePath = "myTargetPath\$dir"
    cmd /c "$pathToZipExe a $archiveType $destinationFilePath $sourceFilePath"
}
Overall it looks like you got pretty close to a solution, with some minor changes needed to support the foreach loop. If you're confident that create-7zip works fine for a single folder, you can substitute that for the cmd /c line above. Here's a listing of some handy example usages for 7zip on the command line.
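If the destination folder might not exist yet, the Test-Path and New-Item check mentioned above, combined with your existing create-7zip function, could look roughly like this (a minimal sketch; the paths are placeholders for your own):

$directories = @("www","sql","services")
$sourceRoot      = "C:\zipme"            # placeholder source root
$destinationRoot = "C:\_archive\zipped"  # placeholder destination root

# Create the destination folder first if it is missing
if (-not (Test-Path $destinationRoot)) {
    New-Item -Path $destinationRoot -ItemType Directory | Out-Null
}

foreach ($dir in $directories) {
    # One zip file per source folder so nothing gets overwritten
    create-7zip "$sourceRoot\$dir" "$destinationRoot\$dir.zip"
}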

Related

How do I get the path of a directory where a SQL backup file is missing?

The task is to track the execution of SQL backups. One of the tracking mechanisms is to browse the directory where the backups are saved and, if a file is missing, send an email with the full path of the directory where the file is missing. This is what I am trying to implement with a PowerShell script, but for some reason my script doesn't work and doesn't give me the information I need.
$list = Get-ChildItem -Path network_path_to_share -Recurse -File | Where-Object {$_.DirectoryName -like '*FULL*' -and $_.LastWriteTime -ge (Get-Date).AddDays(-6)}
$fileExists = Test-Path $list
If ($fileExists)
{
#Do Nothing
}
Else
{
$list | Select DirectoryName
}
Can anyone help?
I suppose what you need is to test each file or path individually. You use Get-ChildItem with -Recurse, so it returns multiple files and stores them in $list.
If you do something like
Foreach ($item in $list) {
    $fileexists = Test-Path $item
    If ($fileexists -eq $false) {
        # do something
    }
}
You should be good to go. This would cycle through all items and do whatever needs to be done. If you compare against $false, you don't need an Else block, and you could also put Test-Path directly into the If statement, like
If ((Test-Path $item) -eq $false) {}
Edit: Sorry I accidentally posted the answer before finishing it lol
Also, as stackprotector correctly points out, Get-ChildItem can only retrieve items that exist; it has no way to detect missing files.
If you want to check for something that is missing or doesn't exist, you need to start with a known condition, e.g. the server names or the expected file or directory names.
If you know that, you can create a static list (or dynamically query Active Directory for your SQL servers, assuming the backup file names correspond to the server names), then check the files that were actually created and output the missing ones for triage.
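For instance, if the backup files really are named after the servers, the expected list could be built dynamically rather than typed by hand. A hedged sketch, assuming the ActiveDirectory module is available and that each server produces a <server>.bak file (the filter is only an example):

# Requires the ActiveDirectory (RSAT) module; adjust the filter to your naming convention
Import-Module ActiveDirectory
$ExpectedFiles = Get-ADComputer -Filter 'Name -like "SQL*"' |
    ForEach-Object { "$($_.Name).bak" }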
Here is a modification to your script (essentially the opposite of what you did) that might point you in the right direction:
## List of expected files
$ExpectedFiles = @(
'File1.bak',
'File2.bak',
'File3.bak',
'File4.bak'
'...'
)
## Get a list of created files
$list = Get-ChildItem -Path network_path_to_share -Recurse -File | Where-Object {$_.DirectoryName -like '*FULL*' -and $_.LastWriteTime -ge (Get-Date).AddDays(-6)} | Select -ExpandProperty Name
## Check whether each expected file exists in the array of backups that actually were created
foreach ($file in $ExpectedFiles) {
    if ($list -notcontains $file) {
        "$($file) is missing!"
    }
}

Powershell dropping characters while creating folder names

I am having a strange problem in PowerShell (Version 2021.8.0) while creating folders and naming them. I start with a number of individual ebook files in a folder that I set using Set-Location. I use the file name minus the extension to create a new folder with the same name as the e-book file. The code works fine the majority of the time with the various file extensions I have stored in an array at the beginning of the code.
What's happening is that the code creates the proper folder name the majority of the time and moves the source file into the folder after it's created.
The problem is that if the source file name of a file with the ".epub" extension ends in an "e", then the "e" is missing from the end of the created folder name. I thought I also saw it drop "r" and "p", but I have been unable to replicate that error recently.
Below is my code. It is set up to run against file extensions for e-books and audiobooks. Please ignore the error messages that are being generated when files of a specific type don't exist in the working folder. I am just using the array for testing and it will be filled automatically later by reading the folder contents.
This Code Creates a Folder for Each File and moves the file into that Folder:
Clear-Host
$SourceFileFolder = 'N:\- Books\- - BMS\- Books Needing Folders'
Set-Location $SourceFileFolder
$MyArray = ( "*.azw3", "*.cbz", "*.doc", "*.docx", "*.djvu", "*.epub", "*.mobi", "*.mp3", "*.pdf", "*.txt" )
Foreach ($FileExtension in $MyArray) {
    Get-ChildItem -Include $FileExtension -Name -Recurse | Sort-Object | ForEach-Object {
        $SourceFileName = $_
        $NewDirectoryName = $SourceFileName.TrimEnd($FileExtension)
        New-Item -Name $NewDirectoryName -ItemType "directory"
        $OriginalFileName = Join-Path -Path $SourceFileFolder -ChildPath $SourceFileName
        $DestinationFilename = Join-Path -Path $NewDirectoryName -ChildPath $SourceFileName
        $DestinationFilename = Join-Path -Path $SourceFileFolder -ChildPath $DestinationFilename
        Move-Item $OriginalFileName -Destination $DestinationFilename
    }
}
Thanks for any help you can give. Driving me nuts and I am pretty sure it's something that I am doing wrong, like always.
String.TrimEnd()
Removes all the trailing occurrences of a set of characters specified in an array from the current string.
The TrimEnd method removes all characters that match the character array you provide. It does not check whether .epub is at the end of the string; rather, it trims any of the characters in the supplied argument from the end of the string. In your case, '.', 'e', 'p', 'u' and 'b' are removed from the end until none of these characters remain there, so you eventually (and you do) remove more than you intended.
I'd suggest using EndsWith to match your extensions and performing a substring selection instead, as below. If you deal only with single extensions (e.g. not with .tar.gz or other double extensions), you can also use the .NET [System.IO.Path]::GetFileNameWithoutExtension($MyFileName) method.
$MyFileName = "Teste.epub"
$FileExt = '.epub'
# Wrong approach
$output = $MyFileName.TrimEnd($FileExt)
write-host $output -ForegroundColor Yellow
#Output returns Test
# Proper method
if ($MyFileName.EndsWith($FileExt)) {
    $output = $MyFileName.Substring(0, $MyFileName.Length - $FileExt.Length)
    Write-Host $output -ForegroundColor Cyan
}
# Returns Teste
# Alternative method. Won't work if you want to trim out double extensions (eg. tar.gz)
if ($MyFileName.EndsWith($FileExt)) {
    $Output = [System.IO.Path]::GetFileNameWithoutExtension($MyFileName)
    Write-Host $output -ForegroundColor Cyan
}
You're making this too hard on yourself. Use the .BaseName to get the filename without extension.
Your code simplified:
$SourceFileFolder = 'N:\- Books\- - BMS\- Books Needing Folders'
$MyArray = "*.azw3", "*.cbz", "*.doc", "*.docx", "*.djvu", "*.epub", "*.mobi", "*.mp3", "*.pdf", "*.txt"
(Get-ChildItem -Path $SourceFileFolder -Include $MyArray -File -Recurse) | Sort-Object Name | ForEach-Object {
# BaseName is the filename without extension
$NewDirectory = Join-Path -Path $SourceFileFolder -ChildPath $_.BaseName
$null = New-Item -Path $NewDirectory -ItemType Directory -Force
$_ | Move-Item -Destination $NewDirectory
}

Parse directory listing and pass to another script?

I am trying to write a PowerShell script that will loop through a directory on the C:\ drive and pass the filenames, with their file extensions, to another script to use.
Basically, the output of the directory listing should be passed to another script one by one. That script is a compiling script which expects an argument (parameter) to be passed to it in order to compile the specific module (filename).
Code:
Clear-Host $Path = "C:\SandBox\"
Get-ChildItem $Path -recurse -force | ForEach { If ($_.extension -eq ".cob")
{
Write-Host $_.fullname
}
}
If ($_.extension -eq ".pco")
{
Write-Host $_.fullname }
}
You don't need to parse the output as text, that's deprecated.
Here's something that might work for you:
# getmyfiles.ps1
Param( [string]$Path = (Get-Location) )

dir $Path -Recurse -Force | where {
    $_.Extension -in @('.cob', '.pco')
}

# this is another script that calls the above
.\getmyfiles.ps1 -Path C:\sandbox | ForEach-Object {
    # $_ is a file object. I'm just printing its full path, but you can do other stuff with it
    Write-Host $_.FullName
}
Clear-Host
$Path = "C:\Sandbox\"
$Items = Get-ChildItem $Path -recurse -Include "*.cob", "*.pco"
From your garbled code I am guessing you want to return a list of files that have the .cob and .pco file extensions. You could use the above code to gather those.
$File = $Items.name
$FullName = $items.fullname
Write-Host $Items.name
$File
$FullName
Adding the above lines will allow you to display them in various ways. You can pick the one that suits your needs, then loop through them with a foreach, as shown below.
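A minimal sketch of that final loop, reusing the $Items collection gathered above (the Write-Host body is just a placeholder for whatever you pass each file to):

foreach ($Item in $Items) {
    # Hand the full path of each .cob/.pco file to the next step, e.g. your compile script
    Write-Host $Item.FullName
}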
As a rule this is not a place for code to be written for you, but you have tried to add some to the question, so I've taken a look. Sometimes you just need a nudge in the right direction.

How many ways are there to check if a script was "successful" using PowerShell?

I am still very new, and I have, for example, one script to back up some folders by zipping them and copying them to a newly created folder.
Now I want to know if the zip and copy process was successful; by successful I mean that my computer zipped and copied it. I don't want to check the content, so I assume that my script picked the right folders and zipped them.
Here is my script :
$backupversion = "1.65"
# declare variables for zip
$folder = "C:\com\services" , "C:\com\www"
$destPath = "C:\com\backup\$backupversion\"
# Create Folder for the zipped services
New-Item -ItemType directory -Path "$destPath"
#Define zip function
function create-7zip {
    param([String] $folder,
          [String] $destinationFilePath)
    write-host $folder $destinationFilePath
    [string]$pathToZipExe = "C:\Program Files (x86)\7-Zip\7zG.exe";
    [Array]$arguments = "a", "-tzip", "$destinationFilePath", "$folder";
    & $pathToZipExe $arguments;
}

Get-ChildItem $folder | ? { $_.PSIsContainer } | % {
    write-host $_.BaseName $_.Name;
    $dest = [System.String]::Concat($destPath, $_.Name, ".zip");
    (create-7zip $_.FullName $dest)
}
Now I can either check whether a newly created folder exists in the parent folder, based on its timestamp.
Or I can check whether there are zip files in the subfolders I created.
Which way would you suggest? These are probably the only ways I know, but there are a million ways to do this.
What's your idea? The only rule is that PowerShell should be used.
Thanks in advance.
You could try using Try and Catch by wrapping the (create-7zip $_.FullName $dest) call in a Try block and then catching any errors:
Try{ (create-7zip $_.FullName $dest) }
Catch{ Write-Host $error[0] }
This will try the create-7zip function and write any errors that may occur to the shell.
One thing that can be tried is checking the $? variable for the status of the command.
$? stores the status of the last command run. So for
create-7zip $_.FullName $dest
If you then echo out $? you will see either true or false.
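Since create-7zip ultimately launches an external executable (7zG.exe), the $LASTEXITCODE automatic variable is also worth a look: it holds the exit code of the last native program that ran, and 7-Zip returns 0 on success. A minimal sketch:

create-7zip $_.FullName $dest
if ($LASTEXITCODE -ne 0) {
    Write-Host "7-Zip failed for $($_.FullName) (exit code $LASTEXITCODE)"
}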
Another option is the $error variable
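$error is a collection of the errors raised in the session, with the most recent first. One rough sketch of using it around a single call:

$error.Clear()                      # start with an empty error collection
create-7zip $_.FullName $dest
if ($error.Count -gt 0) {           # something was recorded during the call
    Write-Host "create-7zip reported:" $error[0]
}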
You can also combine these in all sorts of ways (Or with the exception handling).
For example, run your command
ForEach-Object {
    create-7zip $_.FullName $dest
    if (!$?) { "$($_.FullName) $($Error[0])" | Out-File Errors.txt -Append }
}
That script is more pseudocode for ideas than working code, but it should at least get you close to using it!

How to retrieve a recursive directory and file list from PowerShell excluding some files and folders?

I want to write a PowerShell script that will recursively search a directory, but exclude specified files (for example, *.log, and myFile.txt), and also exclude specified directories, and their contents (for example, myDir and all files and folders below myDir).
I have been working with the Get-ChildItem CmdLet, and the Where-Object CmdLet, but I cannot seem to get this exact behavior.
I like Keith Hill's answer except it has a bug that prevents it from recursing past two levels. These commands manifest the bug:
New-Item level1/level2/level3/level4/foobar.txt -Force -ItemType file
cd level1
GetFiles . xyz | % { $_.fullname }
With Hill's original code you get this:
...\level1\level2
...\level1\level2\level3
Here is a corrected, and slightly refactored, version:
function GetFiles($path = $pwd, [string[]]$exclude)
{
    foreach ($item in Get-ChildItem $path)
    {
        if ($exclude | Where {$item -like $_}) { continue }
        $item
        if (Test-Path $item.FullName -PathType Container)
        {
            GetFiles $item.FullName $exclude
        }
    }
}
With that bug fix in place you get this corrected output:
...\level1\level2
...\level1\level2\level3
...\level1\level2\level3\level4
...\level1\level2\level3\level4\foobar.txt
I also like ajk's answer for conciseness though, as he points out, it is less efficient. The reason it is less efficient, by the way, is because Hill's algorithm stops traversing a subtree when it finds a prune target while ajk's continues. But ajk's answer also suffers from a flaw, one I call the ancestor trap. Consider a path such as this that includes the same path component (i.e. subdir2) twice:
\usr\testdir\subdir2\child\grandchild\subdir2\doc
Set your location somewhere in between, e.g. cd \usr\testdir\subdir2\child, then run ajk's algorithm to filter out the lower subdir2 and you will get no output at all, i.e. it filters out everything because of the presence of subdir2 higher in the path. This is a corner case, though, and not likely to be hit often, so I would not rule out ajk's solution due to this one issue.
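To see the ancestor trap in action, here is a small demonstration using the hypothetical path above (the filter is the same -notmatch pattern ajk uses, targeting the lower subdir2):

cd \usr\testdir\subdir2\child
# Every FullName below this location already contains '\subdir2\' from the ancestor,
# so the -notmatch filter rejects every item and nothing is returned.
Get-ChildItem -Recurse | Where-Object { $_.FullName -notmatch '\\subdir2($|\\)' }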
Nonetheless, I offer here a third alternative, one that does not have either of the above two bugs. Here is the basic algorithm, complete with a convenience definition for the path or paths to prune--you need only modify $excludeList to your own set of targets to use it:
$excludeList = @("stuff","bin","obj*")
Get-ChildItem -Recurse | % {
    $pathParts = $_.FullName.substring($pwd.path.Length + 1).split("\");
    if ( ! ($excludeList | where { $pathParts -like $_ } ) ) { $_ }
}
My algorithm is reasonably concise but, like ajk's, it is less efficient than Hill's (for the same reason: it does not stop traversing subtrees at prune targets). However, my code has an important advantage over Hill's--it can pipeline! It is therefore amenable to fit into a filter chain to make a custom version of Get-ChildItem while Hill's recursive algorithm, through no fault of its own, cannot. ajk's algorithm can be adapted to pipeline use as well, but specifying the item or items to exclude is not as clean, being embedded in a regular expression rather than a simple list of items that I have used.
I have packaged my tree pruning code into an enhanced version of Get-ChildItem. Aside from my rather unimaginative name--Get-EnhancedChildItem--I am excited about it and have included it in my open source Powershell library. It includes several other new capabilities besides tree pruning. Furthermore, the code is designed to be extensible: if you want to add a new filtering capability, it is straightforward to do. Essentially, Get-ChildItem is called first, and pipelined into each successive filter that you activate via command parameters. Thus something like this...
Get-EnhancedChildItem -Recurse -Force -Svn -Exclude *.txt -ExcludeTree doc*,man -FullName -Verbose
... is converted internally into this:
Get-ChildItem | FilterExcludeTree | FilterSvn | FilterFullName
Each filter must conform to certain rules: accepting FileInfo and DirectoryInfo objects as inputs, generating the same as outputs, and using stdin and stdout so it may be inserted in a pipeline. Here is the same code refactored to fit these rules:
filter FilterExcludeTree()
{
    $target = $_
    Coalesce-Args $Path "." | % {
        $canonicalPath = (Get-Item $_).FullName
        if ($target.FullName.StartsWith($canonicalPath)) {
            $pathParts = $target.FullName.substring($canonicalPath.Length + 1).split("\");
            if ( ! ($excludeList | where { $pathParts -like $_ } ) ) { $target }
        }
    }
}
The only additional piece here is the Coalesce-Args function (found in this post by Keith Dahlby), which merely sends the current directory down the pipe in the event that the invocation did not specify any paths.
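Coalesce-Args itself is not reproduced here; a rough stand-in (not Dahlby's original code) that behaves as described, returning the first non-empty argument, might look like this:

# Hypothetical stand-in: return the first argument that is not null or empty,
# so 'Coalesce-Args $Path "."' falls back to the current directory when $Path is unset.
function Coalesce-Args {
    foreach ($arg in $args) {
        if ($arg) { return $arg }
    }
}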
Because this answer is getting somewhat lengthy, rather than go into further detail about this filter, I refer the interested reader to my recently published article on Simple-Talk.com entitled Practical PowerShell: Pruning File Trees and Extending Cmdlets where I discuss Get-EnhancedChildItem at even greater length. One last thing I will mention, though, is another function in my open source library, New-FileTree, that lets you generate a dummy file tree for testing purposes so you can exercise any of the above algorithms. And when you are experimenting with any of these, I recommend piping to % { $_.fullname } as I did in the very first code fragment for more useful output to examine.
The Get-ChildItem cmdlet has an -Exclude parameter that is tempting to use but it doesn't work for filtering out entire directories from what I can tell. Try something like this:
function GetFiles($path = $pwd, [string[]]$exclude)
{
    foreach ($item in Get-ChildItem $path)
    {
        if ($exclude | Where {$item -like $_}) { continue }
        if (Test-Path $item.FullName -PathType Container)
        {
            $item
            GetFiles $item.FullName $exclude
        }
        else
        {
            $item
        }
    }
}
Here's another option, which is less efficient but more concise. It's how I generally handle this sort of problem:
Get-ChildItem -Recurse .\targetdir -Exclude *.log |
Where-Object { $_.FullName -notmatch '\\excludedir($|\\)' }
The '\\excludedir($|\\)' expression allows you to exclude the directory and its contents at the same time.
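A quick way to see what that regular expression does and doesn't match, using hypothetical paths:

'C:\targetdir\excludedir'          -match '\\excludedir($|\\)'   # True  (the directory itself)
'C:\targetdir\excludedir\file.txt' -match '\\excludedir($|\\)'   # True  (anything beneath it)
'C:\targetdir\excludedirs\a.txt'   -match '\\excludedir($|\\)'   # False (similar name, different folder)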
Update: Please check the excellent answer from msorens for an edge case flaw with this approach, and a much more fleshed out solution overall.
Recently, I explored the possibility of parameterizing the folder to scan and the location where the result of the recursive scan will be stored. At the end, I also summarize the number of folders scanned and the number of files inside them. Sharing it with the community in case it may help other developers.
##Script Starts
#read folder to scan and file location to be placed
$whichFolder = Read-Host -Prompt 'Which folder to Scan?'
$whereToPlaceReport = Read-Host -Prompt 'Where to place Report'
$totalFolders = 1
$totalFiles = 0
Write-Host "Process started..."
#IMPORTANT: '?' is used as the separator because a Windows file name cannot contain this character
#Get Foldernames into Variable for ForEach Loop
$DFSFolders = get-childitem -path $whichFolder | where-object {$_.Psiscontainer -eq "True"} |select-object name ,fullName
#Below Logic for Main Folder
$mainFiles = get-childitem -path $whichFolder -file
("Folder Path" + "?" + "Folder Name" + "?" + "File Name " + "?"+ "File Length" )| out-file "$whereToPlaceReport\Report.csv" -Append
#Loop through the files in the main folder
foreach ($file in $mainFiles)
{
    $totalFiles = $totalFiles + 1
    ($whichFolder + "?" + "Main Folder" + "?" + $file.name + "?" + $file.length) | out-file "$whereToPlaceReport\Report.csv" -Append
}
foreach ($DFSfolder in $DFSfolders)
{
    #write the folder name at the beginning
    $totalFolders = $totalFolders + 1
    write-host " Reading folder $whichFolder\$($DFSfolder.name)"
    #$DFSfolder.fullName | out-file "C:\Users\User\Desktop\PoC powershell\ok2.csv" -Append
    #For each folder, get its contents recursively and write one report line per file
    $files = get-childitem -path "$whichFolder\$($DFSfolder.name)" -recurse
    foreach ($file in $files)
    {
        $totalFiles = $totalFiles + 1
        ($DFSfolder.fullName + "?" + $DFSfolder.name + "?" + $file.name + "?" + $file.length) | out-file "$whereToPlaceReport\Report.csv" -Append
    }
}
# If running in the console, wait for input before closing.
if ($Host.Name -eq "ConsoleHost")
{
Write-Host ""
Write-Host ""
Write-Host ""
Write-Host " **Summary**" -ForegroundColor Red
Write-Host " ------------" -ForegroundColor Red
Write-Host " Total Folders Scanned = $totalFolders " -ForegroundColor Green
Write-Host " Total Files Scanned = $totalFiles " -ForegroundColor Green
Write-Host ""
Write-Host ""
Write-Host "I have done my Job,Press any key to exit" -ForegroundColor white
$Host.UI.RawUI.FlushInputBuffer() # Make sure buffered input doesn't "press a key" and skip the ReadKey().
$Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyUp") > $null
}
##Output (screenshot omitted)
##Bat code to run the above PowerShell script
@ECHO OFF
SET ThisScriptsDirectory=%~dp0
SET PowerShellScriptPath=%ThisScriptsDirectory%MyPowerShellScript.ps1
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File ""%PowerShellScriptPath%""' -Verb RunAs}";
A bit late, but try this one.
function Set-Files($Path) {
    if (Test-Path $Path -PathType Leaf) {
        # Do any logic on the file
        Write-Host $Path
        return
    }
    if (Test-Path $Path -PathType Container) {
        # Do any logic on the folder; use -Exclude on Get-ChildItem if needed
        # then recurse
        Get-ChildItem -Path $Path | ForEach-Object { Set-Files -Path $_.FullName }
    }
}
# call
Set-Files -Path 'D:\myFolder'
Commenting here as this seems to be the most popular answer on the subject of searching for files whilst excluding certain directories in PowerShell.
To avoid issues with post-filtering of results (e.g. permission issues), I only needed to filter out top-level directories, and that is all this example is based on. So whilst this example doesn't filter child directory names, it could easily be made recursive to support that, if you were so inclined.
Quick breakdown of how the snippet works
$folders << Uses Get-Childitem to query the file system and perform folder exclusion
$file << The pattern of the file I am looking for
foreach << Iterates the $folders variable performing a recursive search using the Get-Childitem command
$folders = Get-ChildItem -Path C:\ -Directory -Name -Exclude Folder1,"Folder 2"
$file = "*filenametosearchfor*.extension"
foreach ($folder in $folders) {
    Get-Childitem -Path "C:/$folder" -Recurse -Filter $file | ForEach-Object { Write-Output $_.FullName }
}