Monitoring DFSR for a specific file - PowerShell

I have a script that creates a large template file (about 40-50 GB) within VMware. The template file is created on a datastore that is replicated between 2 datacentres using DFSR.
I need to monitor the DFSR status in order that once a specific file has replicated, the script will go on to do some other bits and pieces.
$file = 'myfile.template'
$timeout = New-TimeSpan -Minutes 5
Start-Sleep -Seconds 5
$sw = [System.Diagnostics.Stopwatch]::StartNew()
while ($sw.Elapsed -lt $timeout) {
    # Get-DfsrStatus and the 'something' property are placeholders - this is the part I can't work out
    $DFSRStatus = Get-DfsrStatus | Where-Object { $_.Name -eq $file }
    if ($DFSRStatus.something -eq 'ok') {
        return
    }
    Start-Sleep -Seconds 5
}
Exit
Unfortunately, I've not used the DFSR module before. I tried adding a test file and checking the replication status to see what happens when a file gets replicated, but even with a 1 GB test file I wasn't able to see what the status looks like while replication is in progress or once it has completed.
Hopefully someone with more experience in DFSR can highlight what the right syntax is to check DFSR for a specific file and what the correct 'success' event is.

Another option: instead of trying to monitor DFSR, just check whether the file exists at the target location and whether its size and date match the original.
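If you go down that route, a minimal polling sketch could look like the one below. The source and replica paths are assumptions for illustration, and the same stopwatch/timeout pattern from your script is kept so it can never wait forever:

# Hypothetical source and replica paths - adjust to your environment
$source  = '\\dc1\datastore\myfile.template'
$replica = '\\dc2\datastore\myfile.template'
$timeout = New-TimeSpan -Minutes 120   # 40-50 GB can take a while to replicate
$sw = [System.Diagnostics.Stopwatch]::StartNew()

while ($sw.Elapsed -lt $timeout) {
    if (Test-Path $replica) {
        $src = Get-Item $source
        $dst = Get-Item $replica
        # Treat the file as replicated once its size and timestamp match the original
        if (($dst.Length -eq $src.Length) -and ($dst.LastWriteTime -eq $src.LastWriteTime)) {
            return   # replicated - carry on with the rest of the script
        }
    }
    Start-Sleep -Seconds 30
}
Write-Warning "Timed out waiting for $replica to match $source."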

Related

How do I create a loop based on file size in PowerShell

I am working with Intune and PowerShell. I basically want to run an exe file which downloads 15.2 GB / 7932 files for the installation from the Autodesk website, and then creates a text file so that Intune knows it's done, as I want to delete all the install files with another script later.
The problem is that the PowerShell script will run and close before it has finished downloading; Intune thinks it is done, the next script tries to install what was downloaded, and it fails because the download is incomplete.
I have tried to put in a wait command, but then Intune will just hang and you have to restart Windows, which is something I don't want the users to do.
I am thinking of adding a loop so it checks the file size of the following folder:
C:\Autodesk\{E658F785-6D4D-4B7F-9BEB-C33C8E0027FA}
and once it reaches 15.2 GB / 7932 files, it goes to the next step and creates the text file.
Below is my current PowerShell script:
Start-Process -NoNewWindow -FilePath "\\arch-syd-fs\EM Setup\Autodesk Recap Custom Install 2023\Source 1 Download\Revit_2023.exe" -ArgumentList "--quiet" -Wait
New-Item "C:\Temp\Revit 2023" -Type Directory
New-Item -Path "C:\Temp\Revit 2023\Download Done.txt"
Let's break this down into 3 questions:
How do you check the size of a directory?
How do you check the count of files in a directory?
How do you make a script wait until these checks reach a certain value?
It turns out you can do the first two together:
$dirStats = Get-ChildItem -Recurse -Force 'C:\path\to\whatever' | Measure-Object -Sum Length
$size = $dirStats.Sum
$fileCount = $dirStats.Count
Then you can wrap it in a do-until loop (with a delay to keep it from eating all the CPU) to make the script wait until those values reach a certain threshold:
do {
    Start-Sleep 5
    $dirStats = Get-ChildItem -Recurse -Force 'C:\path\to\whatever' | Measure-Object -Sum Length
    $size = $dirStats.Sum
    $fileCount = $dirStats.Count
} until( ($size -ge 15.2*1024*1024*1024) -and ($fileCount -ge 7932) )
Note that $size is in bytes, and you might want to make that an -or condition rather than an -and, depending on whether you want the script to continue after either condition is met or to wait for both.
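Since the original problem was Intune hanging, it may also be worth bounding the loop with a timeout so it can never wait forever. A rough sketch, where the 30-minute limit is an assumption to tune, and 15.2GB uses PowerShell's built-in byte-size suffix:

$deadline = (Get-Date).AddMinutes(30)   # assumed upper bound for the whole download
do {
    Start-Sleep 5
    $dirStats  = Get-ChildItem -Recurse -Force 'C:\Autodesk\{E658F785-6D4D-4B7F-9BEB-C33C8E0027FA}' | Measure-Object -Sum Length
    $size      = $dirStats.Sum
    $fileCount = $dirStats.Count
} until( (($size -ge 15.2GB) -and ($fileCount -ge 7932)) -or ((Get-Date) -gt $deadline) )

if (($size -ge 15.2GB) -and ($fileCount -ge 7932)) {
    # Download looks complete - write the marker file Intune is waiting for
    New-Item "C:\Temp\Revit 2023" -Type Directory -Force | Out-Null
    New-Item -Path "C:\Temp\Revit 2023\Download Done.txt" | Out-Null
} else {
    # Timed out - skip the marker so the follow-up install script won't run against a partial download
    Write-Error 'Download did not finish within the expected time.'
}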

How to maintain session info in a PowerShell script

I have a script like the one below:
foreach ($var1 in (gc 1.txt)){
    # my code logic
}
Here the 1.txt file contains a list of values like abc, xyz, pqr, etc.
If the script is stopped for any reason (a script issue, Ctrl+C, and so on), I need to restart it from where the last session stopped.
To be clear: if the script stopped while processing 'xyz', then when I restart it, it should resume from 'xyz' and not start over from 'abc'.
Please guide me on how to achieve this.
Thanks in advance,
Pavan kumar D
You need to add a counter and keep track of the last iteration using an index file.
Then make sure that you start reading the file where you were interrupted, using Select-Object -Skip:
<#
.NOTE
The file index.txt is created on the first run and is used to store
the last line of the input file that was processed.
The index read from index.txt is used to skip the already processed
lines at the start of the foreach.
If the index doesn't need to be stored between sessions, use a
$global:index variable instead of an index file to speed up the script
(or a RAM drive, though those are not very common any more).
At each successful iteration, the index is incremented and then stored in
index.txt.
At startup, the stored index is compared with the number of lines in the
file, and the script will throw an error if the index is past the end of
the file.
Make sure to **clear** index.txt before starting the file processing
fresh.
#>
# Initialize counters
[int]$StartLine = Get-Content .\index.txt -ErrorAction SilentlyContinue
if (-not $StartLine){ [int]$StartLine = 0 } # First run will have no index file
$Index = $StartLine
[int]$LastLineOfFile = (Get-Content 1.txt).Count - 1 # Arrays start at 0
if ($Index -gt $LastLineOfFile){ # Don't start if the index passed EOF on the last run
    Write-Error "Index passed end of file"
    return
}
foreach ($var in (Get-Content 1.txt | Select -Skip $StartLine)){
    "New loop: $Index" | Out-Host # will start empty
    "Processing value: $var" | Out-Host
    ####
    # Processing here
    ####
    "Done processing value: $var" | Out-Host
    $Index++
    $Index > index.txt
    "Index incremented" | Out-Host
}
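To start the processing fresh (as the note above says), reset the index file first; a couple of equivalent one-liners:

# Reset the checkpoint so the next run starts from the first line again
Clear-Content .\index.txt
# ...or remove it entirely - a missing index file is treated as "start from 0"
Remove-Item .\index.txt -ErrorAction SilentlyContinue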
For a good article on using a RAM drive, see, for example, How to Create RAM Disk in Windows 10 for Super-Fast Read and Write Speeds.
Windows Server does support RAM disks natively (kind of) by using the iSCSI Target Server.
See How to Create a RAM Disk on Windows Server?

Fastest way to copy files (but not the entire directory) from one location to another

Summary
I am currently tasked with migrating around 6TB of data to a cloud server, and am trying to optimise how fast this can be done.
I would use standard Robocopy to do this usually, but there is a requirement that I am to only transfer files that are present in a filetable in SQL, and not the entire directories (due to a lot of junk being inside these folders that we do not want to migrate).
What I have tried
Feeding in individual files from an array into Robocopy is unfeasibly slow, as Robocopy instances were being started sequentially for each file, so I tried to speed up this process in 2 ways.
It was pointless to have /MT set above 1 if only one file was being transferred, so I attempted to simulate the multithreading feature. I did this by utilising the new ForEach-Object -Parallel feature in PowerShell 7.0 and setting the throttle limit to 4. With this, I was able to pass the array in and run 4 Robocopy jobs in parallel (still starting and stopping for each file), which increased speed a bit.
Secondly, I split the array into 4 equal arrays and ran the above function across each array as a job, which again increased the speed by quite a bit. For clarity, I had 4 equal arrays fed to 4 ForEach-Object -Parallel code blocks that were each running 4 Robocopy instances, so a total of 16 Robocopy instances at once.
Issues
I encountered a few problems.
My simulation of the multithreading feature did not behave in the way that the /MT flag works in Robocopy. When examining the running processes, my code executes 16 instances of Robocopy at once, whereas the normal /MT:16 flag of Robocopy would only kick off one Robocopy instance (but still be multithreaded).
Secondly, the code causes a memory leak. The memory usage starts to increase when the jobs run and accumulates over time, until a large portion of memory is being used. When the jobs complete, the memory usage stays high until I close PowerShell and the memory is released. Normal Robocopy did not do this.
Finally, I compared the time taken for my method against a standard Robocopy of the entire testing directory, and the normal Robocopy was still over 10x faster and had a better success rate (a lot of the files weren't copied over with my code, and a lot of the time I received error messages that the files were currently in use and couldn't be Robocopied, presumably because they were already in the process of being Robocopied).
Are there any faster alternatives, or is there a way to manually create a multithreading instance of robocopy that would perform like the /MT flag of the standard robocopy? I appreciate any insight/alternative ways of looking at this. Thanks!
# Item(0) is the Source excluding the filename, Item(2) is the Destination, Item(1) is the filename
$robocopy0 = $tables.Tables[0].Rows
$robocopy1 = $tables.Tables[1].Rows
$robocopy0 | ForEach-Object -Parallel {
    robocopy $_.Item(0) $_.Item(2) $_.Item(1) /e /w:1 /r:1 /tee /NP /xo /mt:1 /njh /njs /ns
} -ThrottleLimit 4 -AsJob
$robocopy1 | ForEach-Object -Parallel {
    robocopy $_.Item(0) $_.Item(2) $_.Item(1) /e /w:1 /r:1 /tee /NP /xo /mt:1 /njh /njs /ns
} -ThrottleLimit 4 -AsJob
# *8 for 8 arrays
RunspaceFactory multithreading might be optimally suited for this type of work, with one HUGE caveat. There are quite a few articles out on the net about it. Essentially you create a scriptblock that takes parameters for the source file to copy and the destination to write to, and uses those parameters to execute robocopy. You create an individual PowerShell instance for each variant of the scriptblock and append it to the RunspaceFactory. The RunspaceFactory will queue up the jobs and work through the (probably millions of) queued jobs X at a time, where X is the number of threads you allocate to the pool.
CAVEAT: First and foremost, to queue up millions of jobs relative to the probable millions of files you have across 6 TB, you'll likely need a monumental amount of memory. Assuming an average path length for source and destination of 40 characters (probably very generous) multiplied by a WAG of 50 million files is nearly 4 GB of memory by itself, which doesn't include object structural overhead, the PowerShell instances, etc. You can overcome this either by breaking the job up into smaller chunks or by using a server with 128 GB of RAM or better. Additionally, if you don't terminate the jobs once they've been processed, you'll also experience what appears to be a memory leak, but is really just your jobs producing output that you never collect when they complete.
Here's a sample from a recent project I did migrating files from an old domain NAS to a new domain NAS. I'm using Quest SecureCopy instead of RoboCopy, but you should be able to easily replace those bits:
## MaxThreads is an arbitrary number I use relative to the hardware I have available to run jobs I'm working on.
$FileRSpace_MaxThreads = 15
$FileRSpace = [runspacefactory]::CreateRunspacePool(1, $FileRSpace_MaxThreads, ([System.Management.Automation.Runspaces.InitialSessionState]::CreateDefault()), $Host)
$FileRSpace.ApartmentState = 'MTA'
$FileRSpace.Open()
## The scriptblock that does the actual work.
$sb = {
    param(
        $sp,
        $dp
    )
    ## This is my output object I'll emit through STDOUT so I can consume the status of the job in the main thread after each instance is completed.
    $n = [pscustomobject]@{
        'source'  = $sp
        'dest'    = $dp
        'status'  = $null
        'sdtm'    = [datetime]::Now
        'edtm'    = $null
        'elapsed' = $null
    }
    ## Remove the Import-Module and SecureCopy cmdlet and replace them with the RoboCopy version
    try {
        Import-Module "C:\Program Files\Quest\Secure Copy 7\SCYPowerShellCore.dll" -ErrorAction Stop
        Start-SecureCopyJob -Database "C:\Program Files\Quest\Secure Copy 7\SecureCopy.ssd" -JobName "Default" -Source $sp -Target $dp -CopySubFolders $true -Quiet $true -ErrorAction Stop | Out-Null
        $n.status = $true
    } catch {
        $n.status = $_
    }
    $n.edtm = [datetime]::Now
    $n.elapsed = ("{0:N2} minutes" -f (($n.edtm - $n.sdtm).TotalMinutes))
    $n
}
## The array to hold the individual runspaces and ultimately iterate over to watch for completion.
$FileWorkers = @()
$js = [datetime]::now
log "Job starting at $js"
## $peers is a [pscustomobject] I precreate that just contains every source (property 's') and the destination (property 'd') -- modify to suit your needs as necessary
foreach ($c in $peers) {
    try {
        log "Configuring migration job for '$($c.s)' and '$($c.d)'"
        $runspace = [powershell]::Create()
        [void]$runspace.AddScript($sb)
        [void]$runspace.AddArgument($c.s)
        [void]$runspace.AddArgument($c.d)
        $runspace.RunspacePool = $FileRSpace
        $FileWorkers += [pscustomobject]@{
            'Pipe'  = $runspace
            'Async' = $runspace.BeginInvoke()
        }
        log "Successfully created a multi-threading job for '$($c.s)' and '$($c.d)'"
    } catch {
        log "An error occurred creating a multi-threading job for '$($c.s)' and '$($c.d)'"
    }
}
while ($FileWorkers.Async.IsCompleted -contains $false) {
    $Completed = $FileWorkers | ? { $_.Async.IsCompleted -eq $true }
    [pscustomobject]@{
        'Numbers'      = ("{0}/{1}" -f $Completed.Count, $FileWorkers.Count)
        'PercComplete' = ("{0:P2}" -f ($Completed.Count / $FileWorkers.Count))
        'ElapsedMins'  = ("{0:N2}" -f ([datetime]::Now - $js).TotalMinutes)
    }
    $Completed | % { $_.Pipe.EndInvoke($_.Async) } | Export-Csv -NoTypeInformation ".\$($DtmStamp)_SecureCopy_Results.csv"
    Start-Sleep -Seconds 15
}
## This is to handle a race condition where the final job(s) aren't completed before the sleep but are by the time the while is re-evaluated
$FileWorkers | % { $_.Pipe.EndInvoke($_.Async) } | Export-Csv -NoTypeInformation ".\$($DtmStamp)_SecureCopy_Results.csv"
Suggested strategies, if you don't have a beefy server to queue up all the jobs simultaneously, are either to batch out the files in statically sized blocks (e.g. 100,000, or whatever your hardware can take), or to group files together to send to each scriptblock (e.g. 100 files per scriptblock), which would minimize the number of jobs queued up in the runspace factory (but would require some code changes); a sketch of that grouping follows below.
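For the grouping approach, a minimal sketch of just the batching step might look like this; the chunk size of 100 and the variable names are assumptions, and each chunk would then be handed to one runspace job (whose scriptblock loops over its chunk) instead of one file per job:

# Split $peers into chunks of (at most) 100 source/destination pairs each
$chunkSize = 100
$chunks = @()
for ($i = 0; $i -lt $peers.Count; $i += $chunkSize) {
    $end = [Math]::Min($i + $chunkSize - 1, $peers.Count - 1)
    $chunks += ,($peers[$i..$end])   # leading comma keeps each chunk as its own array
}

# One runspace job per chunk instead of per file
foreach ($chunk in $chunks) {
    "Queued a job for $($chunk.Count) file pairs" | Out-Host
}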
HTH
Edit 1: To address constructing the input object I'm using:
$destRoot = '\\destinationserver.com\share'
$peers = @()
$children = @()
$children += (Get-ChildItem '\\sourceserver\share' -Force) | Select -ExpandProperty FullName
foreach ($c in $children) {
    $peers += [pscustomobject]@{
        's' = $c
        'd' = "$($destRoot)\$($c.Split('\')[3])\$($c | Split-Path -Leaf)"
    }
}
In my case, I was taking stuff from \\server1\share1\subfolder1 and moving it to something like \\server2\share1\subfolder1\subfolder2. So in essence, all the '$peers' array is doing is taking the full name of each source target and constructing the corresponding destination path (since the source/destination server names are different, and possibly the share name too).
You don't have to do this; you can dynamically construct the destination and just loop through the source folders. I perform this extra step because it gives me a two-property array that I can verify is pre-constructed accurately, and against which I can run tests to ensure things exist and are accessible.
There is a lot of extra bloat in my script due to custom objects meant to give me output from each thread put into the multi-threader, so I can see the status of each copy attempt and track things like which folders were successful, how long each individual copy took, etc. If you're using robocopy and dumping the results to a text file, you may not need this. If you want me to pare the script down to its bare-bones components just to get things multi-threading, I can do that if you like.

Alert if no files written > 7 days

Each week we have a backup file replicate over to a folder on our file server. I'm looking for a way to notify me if a file has not been written to that folder in over 7 days.
I found this script online (I apologize to the author for not being able to credit them), and I feel like it puts me on the right track. What I'm really looking for, though, is some kind of output that will tell me if a file hasn't been written at all. I don't need confirmation when the backup is successful.
$lastWrite = (Get-Item C:\ExampleDirectory).LastWriteTime
$timespan = New-TimeSpan -Days 7
if (((Get-Date) - $lastWrite) -gt $timespan) {
    # older
} else {
    # newer
}
What you'll want to do is grab all the files in the directory, sort by LastWriteTime, and then compare that of the newest file to 7 days ago:
$LastWriteTime = (Get-ChildItem C:\ExampleDirectory | Sort LastWriteTime)[-1].LastWriteTime
if($LastWriteTime -gt [DateTime]::Now.AddDays(-7))
{
    # File newer than 7 days is present
}
else
{
    # Something is wrong, time to alert!
}
For the alerting part, check out Send-MailMessage or Write-EventLog
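As a rough illustration of the alerting part with Send-MailMessage (the SMTP server, addresses and folder path are placeholders you would need to change):

$folder = 'C:\ExampleDirectory'   # placeholder path
$LastWriteTime = (Get-ChildItem $folder | Sort-Object LastWriteTime)[-1].LastWriteTime

if ($LastWriteTime -le [DateTime]::Now.AddDays(-7)) {
    # No file written in the last 7 days - send an alert (all mail settings are placeholders)
    Send-MailMessage -From 'backup-monitor@example.com' -To 'admin@example.com' `
        -Subject "No backup file written to $folder in over 7 days" `
        -Body "Newest file in $folder was last written $LastWriteTime." `
        -SmtpServer 'smtp.example.com'
}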

How to copy file for one time a day - elegant solution

I have a remote server where one file will be uploaded per day. I don't know when the file will be uploaded. I need to COPY this file to another server for processing, and I need to do this just once per file (once a day). When the file is uploaded to the remote server, I need to copy it within an hour, so I have to run this script at least once per hour. I'm using this script:
# Get yesterday's date
$date = (Get-Date).AddDays(-1) | Get-Date -Format yyyyMMdd
$check = ""
$check = Get-Content c:\checkiftransfered.txt
# Test if the file checkiftransfered.txt contains True or False. If it contains True, the file for this day was already copied
if ($check -ne "True") {
    # Test if the file exists - it has a specific name and yesterday's date
    if (Test-Path \\remoteserver\folder\abc_$date.xls) {
        Copy-Item \\remoteserver\folder\abc_$date.xls \\remoteserver2\folder\abc_$date.xls
        # Write down the information that the file was already copied
        "True" | Out-File c:\checkiftransfered.txt
    } else { Write-Host "File has not been uploaded." }
} else { Write-Host "File has been copied." }
# + I will need another script that will delete checkiftransfered.txt at 0:00
It will work fine, I think, but I'm looking for a more elegant solution - what is the best way to solve this? Thank you.
In PowerShell V3, Test-Path has handy -NewerThan and -OlderThan parameters, so you could simplify to this:
$yesterday = (Get-Date).AddDays(-1)
$date = $yesterday | Get-Date -Format yyyyMMdd
$path = "\\remoteserver\folder\abc_$date.xls"
if (Test-Path $path -NewerThan $yesterday)
{
    Copy-Item $path \\remoteserver2\folder\abc_$date.xls -Verbose
    (Get-Item $path).LastWriteTime = $yesterday
}
This eliminates the need to track copy status in a separate file, by using the LastWriteTime instead. One note about using -NewerThan and -OlderThan: don't use them together; it doesn't work as expected.
And lest we forget about some great native tools, here's a solution using robocopy:
robocopy $srcdir $destdir /maxage:1 /mot:60
The /mot:n option will cause robocopy to continuously monitor the source dir - every 60 minutes as specified above.
There is a much, much easier and more reliable way. You can use the FileSystemWatcher class.
$watcher = New-Object System.IO.FileSystemWatcher
$watcher.Path = 'C:\Uploads'
$watcher.IncludeSubdirectories = $true
$watcher.EnableRaisingEvents = $true
$created = Register-ObjectEvent $watcher "Created" -Action {
    Start-Sleep -Seconds (30*60)
    Copy-Item $($eventArgs.FullPath) '\\remoteserver2\folder\'
}
So let's take a look at what we're doing here: we create a new watcher and tell it to watch C:\Uploads. When a new file is uploaded there, the file system sends a notification through the framework to our program, which in turn fires the Created event. When that happens, we tell our program to sleep for 30 minutes to allow the upload to finish (that may be too long, depending on the size of the upload), then we call Copy-Item on the event arguments, which contain the full path to our new file.
By the way, you would need to paste this into a PowerShell window and leave it open on the server; alternatively, you could use the ISE and leave that open. Either way, it is far more reliable than what you currently have.
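If you ever need to stop watching (say, before closing the session cleanly), the subscription returned by Register-ObjectEvent can be torn down like this; a small sketch reusing the $created and $watcher variables from the answer above:

# Stop raising events and remove the subscription created above
$watcher.EnableRaisingEvents = $false
Unregister-Event -SourceIdentifier $created.Name
Remove-Job $created -Force   # discard the background job that held the -Action output
$watcher.Dispose()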