PowerShell Job throttle blocking the server

I'm experiencing a strange issue with our script server being overloaded and running out of resources. We have a script that copies data from one location to another, as defined in a large input file that contains over 200 lines of text in the format 'Source path, Destination path'.
We are now trying to throttle the maximum number of jobs we kick off at once, and I think it's working fine. But for some reason or another we still run out of resources on the server when the input file contains more than 94 lines. This became apparent after some testing.
We tried upgrading our Windows 2008 R2 server with PowerShell 4.0 to 4 processors and 8 GB of RAM, but no luck. So I assume my throttling isn't working as designed.
Error code:
Insufficient system resources exist to complete the requested service.
The code:
$MaxThreads = 4
$FunctionFeed = Import-Csv -Path $File -Delimiter ',' -Header 'Source', 'Destination'
$Jobs = @()
Function Wait-MaxRunningJobs {
Param (
$Name,
[Int]$MaxThreads
)
Process {
$Running = @($Name | where State -eq Running)
while ($Running.Count -ge $MaxThreads) {
$Finished = Wait-Job -Job $Name -Any
$Running = @($Name | where State -eq Running)
}
}
}
$ScriptBlock = {
Try {
Robocopy.exe $Using:Line.Source $Using:Line.Destination $Using:Line.File /MIR /Z /R:3 /W:15 /NP /MT:8 | Out-File $Using:LogFile
[PSCustomObject]@{
Source = if ($Using:Line.Source) {$Using:Line.Source} else {'NA'}
Target = if ($Using:Line.Destination) {$Using:Line.Destination} else {'NA'}
}
}
Catch {
"Robocopy | ERROR: $($Error[0].Exception.Message)" |
Out-File -LiteralPath $Using:LogFile
throw $($Error[0].Exception.Message)
}
}
ForEach ($Line in $FunctionFeed) {
$LogParams = @{
LogFolder = $LogFolder
Name = $Line.Destination + '.log'
Date = 'ScriptStartTime'
Unique = $True
}
$LogFile = New-LogFileNameHC @LogParams
' ' >> $LogFile # Avoid not being able to write to log
$Jobs += Start-Job -Name RoboCopy -ScriptBlock $ScriptBlock
Wait-MaxRunningJobs -Name $Jobs -MaxThreads $MaxThreads
}
if ($Jobs) {
Wait-Job -Job $Jobs
$JobResults = $Jobs | Receive-Job
}
Am I missing something here? Thank you for your help.

You're using background jobs, which actually run in remote sessions on the local machine. Remote sessions are intentionally resource restricted, according to settings set in the session configuration. You can check the current settings using
Get-PSSessionConfiguration
And adjust the settings to increase the resources available to the sessions with
Set-PSSessionConfiguration
You may need to do some testing to determine exactly what resource limit you're hitting, and what adjustments need to be made for this particular application to work.
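For example, a quick look at the default endpoint and its quotas might go like this (a sketch; the exact property and quota names vary by OS and PowerShell version):
# Show the default endpoint's configuration
Get-PSSessionConfiguration -Name Microsoft.PowerShell | Format-List *
# The per-shell quotas also surface in the WSMan: drive
Get-ChildItem WSMan:\localhost\Plugin\Microsoft.PowerShell\Quotas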

Fixed the problem by enlarging MaxMemoryPerShellMB for remote sessions from 1 GB to 2 GB as described here. Keep in mind that Start-Job uses a remote PowerShell session, as mjolinor already indicated, so this setting is applicable to PowerShell jobs.
Solution:
# 'System.OutOfMemoryException error message' when running Robocopy and over 94 PowerShell-Jobs:
Get-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB # Default 1024
Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 2048
# Set PowerShell plugins memory from 1 GB to 2 GB
Get-Item WSMan:\localhost\Plugin\Microsoft.PowerShell\Quotas\MaxMemoryPerShellMB # Default 1024
Set-Item WSMan:\localhost\Plugin\Microsoft.PowerShell\Quotas\MaxMemoryPerShellMB 2048
Restart-Service winrm
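To confirm the new quotas took effect after the service restart, a quick read-back (not part of the original fix) might look like this:
# Both values should now report 2048
Get-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB, WSMan:\localhost\Plugin\Microsoft.PowerShell\Quotas\MaxMemoryPerShellMB | Select-Object Name, Value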

Related

How to Ignore lines with StreamWriter WriteLine

I'm trying to figure out how to ignore or stop specific lines from being written to a file with StreamWriter. Here is the code I'm working with, from How to pass arguments to program when using variable as path:
$LogDir = "c:\users\user" # Log file output directory
$PlinkDir = "C:" # plink.exe directory
$SerialIP = "1.1.1.1" # serial device IP address
$SerialPort = 10000 # port to log
function CaptureWeight {
Start-Job -Name WeightLog -ScriptBlock {
filter timestamp {
$sw.WriteLine("$(Get-Date -Format MM/dd/yyyy_HH:mm:ss) $_")
}
try {
$sw = [System.IO.StreamWriter]::new("$using:LogDir\WeightLog_$(Get-Date -f MM-dd-yyyy).txt")
& "$using:PlinkDir\plink.exe" -telnet $using:SerialIP -P $using:SerialPort | TimeStamp
}
finally {
$sw.ForEach('Flush')
$sw.ForEach('Dispose')
}
}
}
$job = CaptureWeight # For testing, save the job
Start-Sleep -Seconds 60 # wait 1 minute
$job | Stop-Job # kill the job
Get-Content "$LogDir\WeightLog_$(Get-Date -f MM-dd-yyyy).txt" # Did it work?
And the output is this:
05/09/2022_14:34:19 G+027800 lb
05/09/2022_14:34:20
05/09/2022_14:34:20 G+027820 lb
05/09/2022_14:34:21
05/09/2022_14:34:21 G+027820 lb
05/09/2022_14:34:22
05/09/2022_14:34:22 G+027820 lb
Without the TimeStamp, every other line is blank. I have a couple of lines to clean up the logs: one removes every other line, the other removes lines with zero weights:
Set-Content -Path "$LogDir\WeightLog_$(get-date -f MM-dd-yyyy).txt" -Value (get-content -Path "$LogDir\WeightLog_$(get-date -f MM-dd-yyyy).txt" | Where-Object { $i % 2 -eq 0; $i++ })
Set-Content -Path "$LogDir\WeightLog_$(get-date -f MM-dd-yyyy).txt" -Value (get-content -Path "$LogDir\WeightLog_$(get-date -f MM-dd-yyyy).txt" | Select-String -Pattern '00000' -NotMatch)
If the files get too large these can take a while to run; it would be nice not to have those lines written in the first place.
Thanks!
Edit: This is what I ended up with:
#****************Serial Scale Weight Logger********************
$LogDir = "c:\ScaleWeightLogger\Logs" # Log File Output Directory
$PlinkDir = "c:\ScaleWeightLogger" # plink.exe Directory
$SerialIP = "1.1.1.1" # Serial Device IP Address
$SerialPort = "10000" # Serial Device Port to Log
$MakeWeight = "000\d\d\d" # Minimum weight to log
[datetime]$JobEndTime = '23:58' # "WeightLog" Job End Time
[datetime]$JobStartTime = '00:02' #Use '8/24/2024 03:00' for a date in the future
# https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_regular_expressions
function StartWeightCapture {
Start-Job -Name WeightLog -ScriptBlock {
filter timestamp {
# Set Output Filter, Do Not Write Blank Lines or Weight Matching...
if([string]::IsNullOrWhiteSpace($_) -or $_ -match $using:MakeWeight) {
# skip it
return
}
# Set TimeStamp Format Filter
$sw.WriteLine("$(Get-Date -Format MM/dd/yyyy_HH:mm:ss) $_")
}
try {
# Set File Path, Set $true to Append
$sw = [System.IO.StreamWriter]::new("$using:LogDir\WeightLog_$(Get-Date -f MM-dd-yyyy).txt", $true)
# Keep Memory Buffer Clear After Writing
$sw.AutoFlush = $true
# Start plink, Filter Output, Append TimeStamp
& "$using:PlinkDir\plink.exe" -telnet $using:SerialIP -P $using:SerialPort | TimeStamp
}
finally {
# Discard Data After Writing
$sw.ForEach('Flush')
$sw.ForEach('Dispose')
}
}
}
function WeightCaptureEndTime {
[datetime]$CurrentTime = Get-Date
[int]$WaitSeconds = ( $JobEndTime - $CurrentTime ).TotalSeconds
Start-Sleep -Seconds $WaitSeconds
}
function StopWeightCapture {
Stop-Job WeightLog
$AddDaysWhenInPast = 1
[datetime]$CurrentTime = Get-Date
If ($JobStartTime -lt $CurrentTime) { $JobStartTime = $JobStartTime.AddDays($AddDaysWhenInPast) }
[int]$WaitSeconds = ( $JobStartTime - $CurrentTime ).TotalSeconds
Start-Sleep -Seconds $WaitSeconds
}
while ($true) {
StartWeightCapture
WeightCaptureEndTime
StopWeightCapture
}
I'm launching it at boot with:
powershell -windowstyle hidden -ExecutionPolicy bypass "& 'C:\ScaleWeightLogger\ScaleWeightLogger.ps1'" & exit
And got this to end it manually since it's in the background. It only grabs the PID of the main powershell process and not the job:
#echo off
for /F "tokens=2" %%K in ('
tasklist /FI "ImageName eq powershell.exe" /FI "Status eq Running" /FO LIST ^| findstr /B "PID:"
') do (
echo "PID is %%K, Ending process..."
taskkill /F /PID %%K
)
pause
exit
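An alternative that avoids guessing which powershell.exe to kill (a sketch, assuming you can add one line to the script): have ScaleWeightLogger.ps1 record its own PID at startup, then stop exactly that process.
# At the top of ScaleWeightLogger.ps1: record this process's PID (hypothetical file path)
$PID | Set-Content -Path 'C:\ScaleWeightLogger\ScaleWeightLogger.pid'
# Later, from any PowerShell prompt, end exactly that process:
Stop-Process -Id (Get-Content 'C:\ScaleWeightLogger\ScaleWeightLogger.pid') -Force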
If I understand correctly, adding this condition should save you the trouble of having to post-process the logs to strip the unwanted lines.
See String.IsNullOrWhiteSpace(String) Method and -match matching operator for details.
filter timestamp {
# if this output is purely whitespace or it matches `00000`
if([string]::IsNullOrWhiteSpace($_) -or $_ -match '00000') {
# skip it
return
}
$sw.WriteLine("$(Get-Date -Format MM/dd/yyyy_HH:mm:ss) $_")
}
Regarding the observation noted in previous question:
...when trying view the file while it's running, it seems like it updates (for viewing) about every 2 minutes, you get one 2 minute chunk of data that is about 2 minutes behind, the 2 minutes of data is there...
For this, you can enable the AutoFlush property from your StreamWriter.
The Remarks section has an excellent explanation of when it's worth enabling this property, as well as the performance implications:
When AutoFlush is set to false, StreamWriter will do a limited amount of buffering, both internally and potentially in the encoder from the encoding you passed in. You can get better performance by setting AutoFlush to false, assuming that you always call Close (or at least Flush) when you're done writing with a StreamWriter.
For example, set AutoFlush to true when you are writing to a device where the user expects immediate feedback. Console.Out is one of these cases: The StreamWriter used internally for writing to Console flushes all its internal state except the encoder state after every call to StreamWriter.Write.
$sw = [System.IO.StreamWriter]::new("$using:LogDir\WeightLog_$(Get-Date -f MM-dd-yyyy).txt")
$sw.AutoFlush = $true

Copy-item using invoke-async in Powershell

This article shows how to use Invoke-Async in PowerShell: https://sqljana.wordpress.com/2018/03/16/powershell-sql-server-run-in-parallel-collect-sql-results-with-print-output-from-across-your-sql-farm-fast/
I wish to run in parallel the copy-item cmdlet in PowerShell because the alternative is to use FileSystemObject via Excel and copy one file at a time out of a total of millions of files.
I have cobbled together the following:
<#
.SYNOPSIS
<Brief description>
For examples type:
Get-Help .\<filename>.ps1 -examples
.DESCRIPTION
Copies files from one path to another
.PARAMETER FileList
e.g. C:\path\to\list\of\files\to\copy.txt
.PARAMETER NumCopyThreads
default is 8 (but can be 100 if you want to stress the machine to maximum!)
.EXAMPLE
.\CopyFilesToBackup -filelist C:\path\to\list\of\files\to\copy.txt
.NOTES
#>
[CmdletBinding()]
Param(
[String] $FileList = "C:\temp\copytest.csv",
[int] $NumCopyThreads = 8
)
$filesToCopy = New-Object "System.Collections.Generic.List[fileToCopy]"
$csv = Import-Csv $FileList
foreach($item in $csv)
{
$file = New-Object fileToCopy
$file.SrcFileName = $item.SrcFileName
$file.DestFileName = $item.DestFileName
$filesToCopy.add($file)
}
$sb = [scriptblock] {
param($file)
Copy-item -Path $file.SrcFileName -Destination $file.DestFileName
}
$results = Invoke-Async -Set $filesToCopy -SetParam file -ScriptBlock $sb -Verbose -Measure:$true -ThreadCount 8
$results | Format-Table
Class fileToCopy {
[String]$SrcFileName = ""
[String]$DestFileName = ""
}
the csv input for which looks like this:
SrcFileName,DestFileName
C:\Temp\dummy-data\101438\101438-0154723869.zip,\\backupserver\Project Archives\101438\0154723869.zip
C:\Temp\dummy-data\101438\101438-0165498273.xlsx,\\backupserver\Project Archives\101438\0165498273.xlsx
What am I missing to get this working, because when I run .\CopyFiles.ps1 -FileList C:\Temp\test.csv nothing happens. The files exist in the source path, but the file objects aren't being pulled from the -Set collection. (Unless I have misunderstood how the collection is used?)
No, I can't use robocopy to do this because there are millions of files which resolve to different paths depending upon their original location.
I have no explanation for your symptom based on the code in your question (see bottom section), but I suggest basing your solution on the (now) standard Start-ThreadJob cmdlet (comes with PowerShell Core; in Windows PowerShell, install it with Install-Module ThreadJob -Scope CurrentUser, for instance[1]):
Such a solution is more efficient than use of the third-party Invoke-Async function, which as of this writing is flawed in that it waits for jobs to finish in a tight loop, which creates unnecessary processing overhead.
Start-ThreadJob jobs are a lightweight, thread-based alternative to the process-based Start-Job background jobs, yet they integrate with the standard job-management cmdlets, such as Wait-Job and Receive-Job.
Here's a self-contained example based on your code that demonstrates its use:
Note: Whether you use Start-ThreadJob or Invoke-Async, you won't be able to explicitly reference custom classes such as [fileToCopy] in the script block that runs in separate threads (runspaces; see bottom section), so the solution below simply uses [pscustomobject] instances with the properties of interest, for simplicity and brevity.
# Create sample CSV file with 10 rows.
$FileList = Join-Path ([IO.Path]::GetTempPath()) "tmp.$PID.csv"
@'
Foo,SrcFileName,DestFileName,Bar
1,c:\tmp\a,\\server\share\a,baz
2,c:\tmp\b,\\server\share\b,baz
3,c:\tmp\c,\\server\share\c,baz
4,c:\tmp\d,\\server\share\d,baz
5,c:\tmp\e,\\server\share\e,baz
6,c:\tmp\f,\\server\share\f,baz
7,c:\tmp\g,\\server\share\g,baz
8,c:\tmp\h,\\server\share\h,baz
9,c:\tmp\i,\\server\share\i,baz
10,c:\tmp\j,\\server\share\j,baz
'@ | Set-Content $FileList
# How many threads at most to run concurrently.
$NumCopyThreads = 8
Write-Host 'Creating jobs...'
$dtStart = [datetime]::UtcNow
# Import the CSV data and transform it to [pscustomobject] instances
# with only .SrcFileName and .DestFileName properties - they take
# the place of your original [fileToCopy] instances.
$jobs = Import-Csv $FileList | Select-Object SrcFileName, DestFileName |
ForEach-Object {
# Start the thread job for the file pair at hand.
Start-ThreadJob -ThrottleLimit $NumCopyThreads -ArgumentList $_ {
param($f)
$simulatedRuntimeMs = 2000 # How long each job (thread) should run for.
# Delay output for a random period.
$randomSleepPeriodMs = Get-Random -Minimum 100 -Maximum $simulatedRuntimeMs
Start-Sleep -Milliseconds $randomSleepPeriodMs
# Produce output.
"Copied $($f.SrcFileName) to $($f.DestFileName)"
# Wait for the remainder of the simulated runtime.
Start-Sleep -Milliseconds ($simulatedRuntimeMs - $randomSleepPeriodMs)
}
}
Write-Host "Waiting for $($jobs.Count) jobs to complete..."
# Synchronously wait for all jobs (threads) to finish and output their results
# *as they become available*, then remove the jobs.
# NOTE: Output will typically NOT be in input order.
Receive-Job -Job $jobs -Wait -AutoRemoveJob
Write-Host "Total time lapsed: $([datetime]::UtcNow - $dtStart)"
# Clean up the temp. file
Remove-Item $FileList
The above yields something like:
Creating jobs...
Waiting for 10 jobs to complete...
Copied c:\tmp\b to \\server\share\b
Copied c:\tmp\g to \\server\share\g
Copied c:\tmp\d to \\server\share\d
Copied c:\tmp\f to \\server\share\f
Copied c:\tmp\e to \\server\share\e
Copied c:\tmp\h to \\server\share\h
Copied c:\tmp\c to \\server\share\c
Copied c:\tmp\a to \\server\share\a
Copied c:\tmp\j to \\server\share\j
Copied c:\tmp\i to \\server\share\i
Total time lapsed: 00:00:05.1961541
Note that the output received does not reflect the input order, and that the overall runtime is roughly 2 times the per-thread runtime of 2 seconds (plus overhead), because 2 "batches" have to be run due to the input count being 10, whereas only 8 threads were made available.
If you upped the thread count to 10 or more (50 is the default), the overall runtime would drop to 2 seconds plus overhead, because all jobs then run concurrently.
Caveat: The above numbers stem from running in PowerShell Core on Microsoft Windows 10 Pro (64-bit; Version 1903), using version 2.0.1 of the ThreadJob module.
Inexplicably, the same code is much slower in Windows PowerShell, v5.1.18362.145.
However, for performance and memory consumption it is better to use batching (chunking) in your case, i.e., to process multiple file pairs per thread.
The following solution demonstrates this approach; tweak $chunkSize to find a batch size that works for you.
# Create sample CSV file with 10 rows.
$FileList = Join-Path ([IO.Path]::GetTempPath()) "tmp.$PID.csv"
@'
Foo,SrcFileName,DestFileName,Bar
1,c:\tmp\a,\\server\share\a,baz
2,c:\tmp\b,\\server\share\b,baz
3,c:\tmp\c,\\server\share\c,baz
4,c:\tmp\d,\\server\share\d,baz
5,c:\tmp\e,\\server\share\e,baz
6,c:\tmp\f,\\server\share\f,baz
7,c:\tmp\g,\\server\share\g,baz
8,c:\tmp\h,\\server\share\h,baz
9,c:\tmp\i,\\server\share\i,baz
10,c:\tmp\j,\\server\share\j,baz
'@ | Set-Content $FileList
# How many threads at most to run concurrently.
$NumCopyThreads = 8
# How many files to process per thread
$chunkSize = 3
# The script block to run in each thread, which now receives a
# $chunkSize-sized *array* of file pairs.
$jobScriptBlock = {
param([pscustomobject[]] $filePairs)
$simulatedRuntimeMs = 2000 # How long each job (thread) should run for.
# Delay output for a random period.
$randomSleepPeriodMs = Get-Random -Minimum 100 -Maximum $simulatedRuntimeMs
Start-Sleep -Milliseconds $randomSleepPeriodMs
# Produce output for each pair.
foreach ($filePair in $filePairs) {
"Copied $($filePair.SrcFileName) to $($filePair.DestFileName)"
}
# Wait for the remainder of the simulated runtime.
Start-Sleep -Milliseconds ($simulatedRuntimeMs - $randomSleepPeriodMs)
}
Write-Host 'Creating jobs...'
$dtStart = [datetime]::UtcNow
$jobs = & {
# Process the input objects in chunks.
$i = 0
$chunk = [pscustomobject[]]::new($chunkSize)
Import-Csv $FileList | Select-Object SrcFileName, DestFileName | ForEach-Object {
$chunk[$i % $chunkSize] = $_
if (++$i % $chunkSize -ne 0) { return }
# Note the need to wrap $chunk in a single-element helper array (, $chunk)
# to ensure that it is passed *as a whole* to the script block.
Start-ThreadJob -ThrottleLimit $NumCopyThreads -ArgumentList (, $chunk) -ScriptBlock $jobScriptBlock
$chunk = [pscustomobject[]]::new($chunkSize) # we must create a new array
}
# Process any remaining objects.
# Note: $chunk -ne $null returns those elements in $chunk, if any, that are non-null
if ($remainingChunk = $chunk -ne $null) {
Start-ThreadJob -ThrottleLimit $NumCopyThreads -ArgumentList (, $remainingChunk) -ScriptBlock $jobScriptBlock
}
}
Write-Host "Waiting for $($jobs.Count) jobs to complete..."
# Synchronously wait for all jobs (threads) to finish and output their results
# *as they become available*, then remove the jobs.
# NOTE: Output will typically NOT be in input order.
Receive-Job -Job $jobs -Wait -AutoRemoveJob
Write-Host "Total time lapsed: $([datetime]::UtcNow - $dtStart)"
# Clean up the temp. file
Remove-Item $FileList
While the output is effectively the same, note how only 4 jobs were created this time, each of which processed (up to) $chunkSize (3) file pairs.
As for what you tried:
The screen shot you show suggests that the problem is that your custom class, [fileToCopy], isn't visible to the script block run by Invoke-Async.
Since Invoke-Async invokes the script block via the PowerShell SDK in separate runspaces that know nothing about the caller's state, it is to be expected that these runspaces don't know your class (this equally applies to Start-ThreadJob).
However, it is unclear why that is a problem in your code, because your script block doesn't make an explicit reference to your class: your script-block parameter $file is not type-constrained (it is implicitly [object]-typed).
Therefore, simply accessing the properties of your custom-class instance inside the script block should work, and indeed does in my tests on Windows PowerShell v5.1.18362.145 on Microsoft Windows 10 Pro (64-bit; Version 1903).
However, if your real script-block code were to explicitly reference custom class [fileToCopy] - such as by defining the parameter as param([fileToCopy] $file) - you would see the symptom.
[1] In Windows PowerShell v3 and v4, which do not come with the PowerShellGet module, Install-Module isn't available by default. However, the module can be installed on demand, as described in Installing PowerShellGet.

Remotely extend a partition using WMI

I'm trying to use PowerShell and WMI to remotely extend a C drive partition on Windows VMs running on VMware.
These VMs do not have WinRM enabled, and enabling it is not an option.
What I'm trying to do is the equivalent of remotely managing an Active Directory computer object in an AD console to extend a partition, but in PowerShell.
I've already managed to pull partition information through Win32 WMI objects, but not yet the extension part.
Does anyone know how to max out a C: partition on a drive like that?
Pre-requisites:
PsExec from SysInternals Suite
PowerShell 2.0 or greater (for the PowerShell modules feature) on the remote computer(s)
First, enable PSRemoting via PsExec:
psexec \\[computer name] -u [admin account name] -p [admin account password] -h -d powershell.exe "enable-psremoting -force"
The following PowerShell script will do the trick, without WMI, via PowerShell Sessions instead, and will do it for as many computers as you want:
Here is the driver script:
$computerNames = @("computer1", "computer2");
$computerNames | foreach {
$session = New-PSSession -ComputerName $_;
Invoke-Command -Session $session -FilePath c:\path\to\Expand-AllPartitionsOnAllDisks.ps1
Remove-PSSession $session
}
And here is Expand-AllPartitionsOnAllDisks.ps1:
Import-Module Storage;
$disks = Get-Disk | Where FriendlyName -ne "Msft Virtual Disk";
foreach ($disk in $disks)
{
$DiskNumber = $disk.DiskNumber;
$Partition = Get-Partition -DiskNumber $disk.DiskNumber;
$PartitionActualSize = $Partition.Size;
$DriveLetter = $Partition.DriveLetter;
$PartitionNumber = $Partition.PartitionNumber
$PartitionSupportedSize = Get-PartitionSupportedSize -DiskNumber $DiskNumber -PartitionNumber $PartitionNumber;
if ($disk.IsReadOnly)
{
Write-Host -ForegroundColor DarkYellow "Skipping drive letter [$DriveLetter] partition number [$PartitionNumber] on disk number [$DiskNumber] because the disk is read-only!";
continue;
}
if ($PartitionActualSize -lt $PartitionSupportedSize.SizeMax) {
# Actual Size will be greater than the partition supported size if the underlying Disk is "maxed out".
# For example, on a 50GB Volume, if all the Disk is partitioned, the SizeMax on the partition will be 53684994048.
# However, the full Size of the Disk, inclusive of unpartition space, will be 53687091200.
# In other words, it will still be more than partition and unlikely to ever equal the partition's MaxSize.
Write-Host -ForegroundColor Yellow "Resizing drive letter [$DriveLetter] partition number [$PartitionNumber] on disk number [$DiskNumber] because `$PartitionActualSize [$PartitionActualSize] is less than `$PartitionSupportedSize.SizeMax [$($PartitionSupportedSize.SizeMax)]"
Resize-Partition -DiskNumber $DiskNumber -PartitionNumber $PartitionNumber -Size $PartitionSupportedSize.SizeMax -Confirm:$false -ErrorAction SilentlyContinue -ErrorVariable resizeError
Write-Host -ForegroundColor Green $resizeError
}
else {
Write-Host -ForegroundColor White "The partition is already the requested size, skipping...";
}
}
See also my related research into doing this:
https://serverfault.com/questions/946676/how-do-i-use-get-physicalextent-on-get-physicaldisk
https://stackoverflow.com/a/4814168/1040437 - Solution using diskpart, requires knowing the volume number

RDS User logoff Script Slow

With the help of several online articles I was able to compile a PowerShell script that logs off all users on each of my RD Session Hosts. I wanted something really gentle about logging off users and writing their profiles back to the roaming profile location on the storage system. However, this is too gentle and takes around four hours to complete with the number of users and RDS servers I have.
This script is designed to set each RDS server to drain, but allow redirection if a server is available; the thought being that within the first 15 minutes I would have the first few servers ready for users to log into.
All of this works, but I would like to see if there are any suggestions on speeding this up a little.
Here is the loop that goes through each server and logs users out and then sets the server logon mode to enabled:
ForEach ($rdsserver in $rdsservers){
try {
query user /server:$rdsserver 2>&1 | select -skip 1 | ? {($_ -split "\s+")[-5]} | % {logoff ($_ -split "\s+")[-6] /server:$rdsserver /V}
Write-Host "Giving the RDS Server time"
Write-Progress "Pausing Script" -status "Giving $rdsserver time to settle" -perc (5/(5/100))
Start-Sleep -Seconds 5
$RDSH=Get-WmiObject -Class "Win32_TerminalServiceSetting" -Namespace "root\CIMV2\terminalservices" -ComputerName $rdsserver -Authentication PacketPrivacy -Impersonation Impersonate
$RDSH.SessionBrokerDrainMode=0
$RDSH.put() > $null
Write-Host "$rdsserver is set to:"
switch ($RDSH.SessionBrokerDrainMode) {
0 {"Allow all connections."}
1 {"Allow incoming reconnections but until reboot prohibit new connections."}
2 {"Allow incoming reconnections but prohibit new connections."}
default {"The user logon state cannot be determined."}
}
}
catch {}
}
Not sure how many servers you have, but if it's fewer than 50 or so you can do this in parallel with PSJobs. You'll have to wrap your code in a script block, launch each server as a separate job, then wait for them to complete and retrieve any data returned. You won't be able to use Write-Host when doing this, so I've swapped those to Out-File. I also didn't parse out your code for collecting your list of servers; I'm going to assume that works and can return a formatted list to a variable $rdsservers. You'll probably also want to modify the messages a bit so you can tell which server is which in the log file, or use different logs for each server. If you want anything other than the names of the jobs to hit the console, you'll have to output it with Write-Output or a return statement.
$SB = {
    param($rdsserver)
    Start-Sleep -Seconds 5
    $RDSH = Get-WmiObject -Class "Win32_TerminalServiceSetting" -Namespace "root\CIMV2\terminalservices" -ComputerName $rdsserver -Authentication PacketPrivacy -Impersonation Impersonate
    $RDSH.SessionBrokerDrainMode = 0
    $RDSH.put() > $null
    # -Append keeps successive writes from clobbering the log
    "$rdsserver is set to:" | Out-File $LogPath -Append # Set $LogPath to whatever you want
    switch ($RDSH.SessionBrokerDrainMode) {
        0 {"Allow all connections." | Out-File $LogPath -Append}
        1 {"Allow incoming reconnections but until reboot prohibit new connections." | Out-File $LogPath -Append}
        2 {"Allow incoming reconnections but prohibit new connections." | Out-File $LogPath -Append}
        default {"The user logon state cannot be determined." | Out-File $LogPath -Append}
    }
}
foreach ($server in $rdsservers){
    Start-Job -ScriptBlock $SB -ArgumentList $server
}
Get-Job | Wait-Job | Receive-Job
The foreach loop launches the jobs and then the last line waits for all of them to complete before getting any data that was output. You can also set a timeout on the wait if there is a chance your script never completes. If you've got a ton of boxes you may want to look into runspaces over jobs as they have better performance but take more work to use. This Link can help you out if you decide to go that way. I don't have an RDS deployment at the moment to test on so if you get any errors or have trouble getting it to work just post a comment and I'll see what I can do.
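As a sketch of the timeout idea mentioned above (the 600 seconds here is an arbitrary number):
# Wait at most 10 minutes; collect whatever finished, then stop and remove the rest
$finished = Get-Job | Wait-Job -Timeout 600
$finished | Receive-Job
Get-Job | Where-Object State -eq 'Running' | Stop-Job
Get-Job | Remove-Job -Force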
I have something ready for testing but it may break fantastically. You wizards out there may look at this and laugh. If I did this wrong please let me know.
$Serverperbatch = 2
$start = 0
$batch = 1
While ($start -lt $rdsservers.Count) {
    # Cap the end index at the last element so the final, smaller batch still works
    $end = [Math]::Min($start + $Serverperbatch - 1, $rdsservers.Count - 1)
    $serverBatch = $rdsservers[$start..$end]
    $jobname = "batch$batch"
    Start-Job -Name $jobname -ScriptBlock {
        param ([string[]]$rdsservers)
        Foreach ($rdsserver in $rdsservers) {
            try {
                query user /server:$rdsserver 2>&1 | select -skip 1 | ? {($_ -split "\s+")[-5]} | % {logoff ($_ -split "\s+")[-6] /server:$rdsserver /V}
                $RDSH = Get-WmiObject -Class "Win32_TerminalServiceSetting" -Namespace "root\CIMV2\terminalservices" -ComputerName $rdsserver -Authentication PacketPrivacy -Impersonation Impersonate
                $RDSH.SessionBrokerDrainMode = 0
                $RDSH.put() > $null
            }
            catch {}
        }
    } -ArgumentList (,$serverBatch) # the leading comma passes the array as a single argument
    $batch += 1
    $start = $end + 1
}
Get-Job | Wait-Job | Receive-Job

How to run a command against multiple servers simultaneously in Powershell

I am looking for a way to restart three services on multiple servers simultaneously. I know how to restart services against a list of servers by using a loop, but as I have many servers it would take a long time to wait for each service on each server to restart in sequential order. Is there a way to send the restart-service command to all servers at once instead of waiting for each server?
You could try working with jobs. Jobs run in the background, and you have to retrieve them with Get-Job to see their status. Please read the information on PowerShell jobs on these two sites:
http://msdn.microsoft.com/en-us/library/dd878288%28v=vs.85%29.aspx
http://technet.microsoft.com/de-DE/library/hh847783.aspx
Your code would look something like this:
$servernames | ForEach-Object {Start-Job -Name "Job-$_" -ArgumentList $_ -ScriptBlock {param($ComputerName) "Enter your code here, using -ComputerName $ComputerName"}}
This will create a background job for each server name. Note that $_ is not available inside the job's script block, which is why the name is passed in via -ArgumentList. As already mentioned, you can check the status using the cmdlet Get-Job. To get the results, use the cmdlet Receive-Job.
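A minimal sketch of the whole pattern, assuming $servernames holds your server list and Spooler stands in for one of your three services:
$servernames = 'server1', 'server2', 'server3' # hypothetical server names
$servernames | ForEach-Object {
    Start-Job -Name "Job-$_" -ArgumentList $_ -ScriptBlock {
        param($ComputerName)
        # Restart-Service has no -ComputerName parameter, so fetch the service
        # remotely first (works in Windows PowerShell) and pipe it in
        Get-Service -Name Spooler -ComputerName $ComputerName | Restart-Service -PassThru
    }
}
Get-Job | Wait-Job | Receive-Job # block until all jobs finish, then collect their output
Get-Job | Remove-Job # clean up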
You can use the Invoke-Command cmdlet:
invoke-command -computername computer1,computer2,computer3 {restart-service servicename}
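Note that Invoke-Command already fans out to the listed computers in parallel (32 at a time by default); -ThrottleLimit adjusts that, and -AsJob turns the whole fan-out into a single job you can wait on. A sketch, with servicename again being a placeholder:
$job = Invoke-Command -ComputerName $servernames -ThrottleLimit 64 -AsJob -ScriptBlock {
    Restart-Service -Name servicename # placeholder service name
}
$job | Wait-Job | Receive-Job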
I use (and have improved) a multi-threading function; you can use it like this:
$Script = {
param($Computername)
restart-service servicename -Computername $Computername
}
@('Srv1','Srv2') | Run-Parallel -ScriptBlock $Script
include this code in your script
function Run-Parallel {
<#
.Synopsis
This is a quick and open-ended script multi-threader searcher
http://www.get-blog.com/?p=189#comment-28834
Improved by Alban LOPEZ 2016
.Description
This script will allow any general, external script to be multithreaded by providing a single
argument to that script and opening it in a separate thread. It works as a filter in the
pipeline, or as a standalone script. It will read the argument either from the pipeline
or from a filename provided. It will send the results of the child script down the pipeline,
so it is best to use a script that returns some sort of object.
.PARAMETER ScriptBlock
This is where you provide the PowerShell ScriptBlock that you want to multithread.
.PARAMETER ItemObj
The ItemObj represents the arguments that are provided to the child script. This is an open-ended
argument and can take a single object from the pipeline, an array, a collection, or a file name. The
multithreading script does its best to find out which you have provided and handle it as such.
If you would like to provide a file, then the file is read with one object on each line and will
be provided as-is to the script you are running, as a string. If this is not desired, then use an array.
.PARAMETER InputParam
This allows you to specify the parameter for which your input objects are to be evaluated. As an example,
if you were to provide a computer name to the Get-Process cmdlet as just an argument, it would attempt to
find all processes where the name was the provided computername and fail. You need to specify that the
parameter that you are providing is the "ComputerName".
.PARAMETER AddParam
This allows you to specify additional parameters to the running command. For instance, if you are trying
to find the status of the "BITS" service on all servers in your list, you will need to specify the "Name"
parameter. This command takes a hash pair formatted as follows:
#{"key" = "Value"}
#{"key1" = "Value"; "key2" = 321; "key3" = 1..9}
.PARAMETER AddSwitch
This allows you to add additional switches to the command you are running. For instance, you may want
to include "RequiredServices" for the "Get-Service" cmdlet. This parameter will take a single string, or
an array of strings, as follows:
"RequiredServices"
@("RequiredServices", "DependentServices")
.PARAMETER MaxThreads
This is the maximum number of threads to run at any given time. If resources are too congested, try lowering
this number. The default value is 20.
.PARAMETER SleepTimer_ms
This is the time between cycles of the child-process detection loop. The default value is 200 ms. If CPU
utilization is high, consider increasing this delay. If the child script takes a long time to
run, then you might increase this value to around 1000 (or 1 second per detection cycle).
.PARAMETER TimeOutGlobal
This is the global timeout in seconds. Once it elapses, all remaining threads are stopped and only the results already completed are returned.
.PARAMETER TimeOutThread
This is the timeout in seconds for each individual thread; a thread is aborted once it exceeds this time.
.PARAMETER PSModules
List of PSModule names to include for use in the ScriptBlock
.PARAMETER PSSapins
List of PSSnapin names to include for use in the ScriptBlock
.EXAMPLE
1..20 | Run-Parallel -ScriptBlock {param($i) Start-Sleep $i; "> $i sec <"} -TimeOutGlobal 15 -TimeOutThread 5
.EXAMPLE
This will execute the scriptBlock for each of the server names in AllServers.txt,
sending the results to GridView. The results will be the output of the child script.
gc AllServers.txt | Run-Parallel $ScriptBlock_GetTSUsers -MaxThreads $findOut_AD.ActiveDirectory.Servers.count -PSModules 'PSTerminalServices' | out-gridview
#>
Param(
[Parameter(ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)]
$ItemObj,
[ScriptBlock]$ScriptBlock = $null,
$InputParam = $Null,
[HashTable] $AddParam = @{},
[Array] $AddSwitch = @(),
$MaxThreads = 20,
$SleepTimer_ms = 100,
$TimeOutGlobal = 300,
$TimeOutThread = 100,
[string[]]$PSSapins = $null,
[string[]]$PSModules = $null,
$Modedebug = $true
)
Begin{
$ISS = [system.management.automation.runspaces.initialsessionstate]::CreateDefault()
ForEach ($Snapin in $PSSapins){
[void]$ISS.ImportPSSnapIn($Snapin, [ref]$null)
}
ForEach ($Module in $PSModules){
[void]$ISS.ImportPSModule($Module)
}
$RunspacePool = [runspacefactory]::CreateRunspacePool(1, $MaxThreads, $ISS, $Host)
$RunspacePool.CleanupInterval=1000
$RunspacePool.Open()
$Jobs = @()
}
Process{
#ForEach ($Object in $ItemObj){
if ($ItemObj){
Write-Host $ItemObj -ForegroundColor Yellow
$PowershellThread = [powershell]::Create().AddScript($ScriptBlock)
If ($InputParam -ne $Null){
$PowershellThread.AddParameter($InputParam, $ItemObj.ToString()) | out-null
}Else{
$PowershellThread.AddArgument($ItemObj.ToString()) | out-null
}
ForEach($Key in $AddParam.Keys){
$PowershellThread.AddParameter($Key, $AddParam.$key) | out-null
}
ForEach($Switch in $AddSwitch){
$PowershellThread.AddParameter($Switch) | out-null
}
$PowershellThread.RunspacePool = $RunspacePool
$Handle = $PowershellThread.BeginInvoke()
$Job = [pscustomobject][ordered]@{
Handle = $Handle
Thread = $PowershellThread
object = $ItemObj.ToString()
Started = Get-Date
}
$Jobs += $Job
}
#}
}
End{
$GlobalStartTime = Get-Date
$continue = $true
While (@($Jobs | Where-Object {$_.Handle -ne $Null}).count -gt 0 -and $continue) {
ForEach ($Job in $($Jobs | Where-Object {$_.Handle.IsCompleted -eq $True})){
$out = $Job.Thread.EndInvoke($Job.Handle)
$out # send the result to standard output
#Write-Host $out -ForegroundColor green
$Job.Thread.Dispose() | Out-Null
$Job.Thread = $Null
$Job.Handle = $Null
}
foreach ($InProgress in $($Jobs | Where-Object {$_.Handle})) {
if ($TimeOutGlobal -and (($(Get-Date) - $GlobalStartTime).totalseconds -gt $TimeOutGlobal)){
$Continue = $false
#Write-Host $InProgress -ForegroundColor magenta
}
if (!$Continue -or ($TimeOutThread -and (($(Get-Date) - $InProgress.Started).totalseconds -gt $TimeOutThread))) {
$InProgress.thread.Stop() | Out-Null
$InProgress.thread.Dispose() | Out-Null
$InProgress.Thread = $Null
$InProgress.Handle = $Null
#Write-Host $InProgress -ForegroundColor red
}
}
Start-Sleep -Milliseconds $SleepTimer_ms
}
$RunspacePool.Close() | Out-Null
$RunspacePool.Dispose() | Out-Null
}
}