Remotely extend a partition using WMI - powershell

I'm trying to use PowerShell and WMI to remotely extend the C: drive partition on Windows VMs running on VMware.
These VMs do not have WinRM enabled, and enabling it is not an option.
What I'm trying to do is the PowerShell equivalent of remotely managing an Active Directory computer from the AD console to extend a partition.
I've already managed to pull partition information through the Win32 WMI classes, but not the extension part yet.
Does anyone know how to grow a C: partition to fill its disk this way?

Prerequisites:
PsExec from the Sysinternals Suite
PowerShell 2.0 or later on the remote computer(s), for module support
First, enable PSRemoting via PsExec:
psexec \\[computer name] -u [admin account name] -p [admin account password] -h -d powershell.exe "enable-psremoting -force"
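Before creating sessions, it can be worth a quick sanity check that remoting actually came up on a target (an optional step, assuming default WinRM settings):
# Both of these should succeed once Enable-PSRemoting has run on the target
Test-WSMan -ComputerName computer1
Invoke-Command -ComputerName computer1 -ScriptBlock { $PSVersionTable.PSVersion }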
The following PowerShell script will do the trick, via PowerShell sessions instead of WMI, and will do it for as many computers as you want:
Here is the driver script:
$computerNames = @("computer1", "computer2");
$computerNames | foreach {
    $session = New-PSSession -ComputerName $_;
    Invoke-Command -Session $session -FilePath c:\path\to\Expand-AllPartitionsOnAllDisks.ps1
    Remove-PSSession $session
}
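If the account running the driver script is not already an administrator on the targets, the same loop works with explicit credentials (a small variation, not part of the original answer):
$cred = Get-Credential
$computerNames | foreach {
    $session = New-PSSession -ComputerName $_ -Credential $cred
    Invoke-Command -Session $session -FilePath c:\path\to\Expand-AllPartitionsOnAllDisks.ps1
    Remove-PSSession $session
}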
And here is Expand-AllPartitionsOnAllDisks.ps1:
Import-Module Storage;
$disks = Get-Disk | Where FriendlyName -ne "Msft Virtual Disk";
foreach ($disk in $disks)
{
    $DiskNumber = $disk.DiskNumber;
    # Assumes one partition of interest per disk; with multiple partitions these variables become arrays.
    $Partition = Get-Partition -DiskNumber $disk.DiskNumber;
    $PartitionActualSize = $Partition.Size;
    $DriveLetter = $Partition.DriveLetter;
    $PartitionNumber = $Partition.PartitionNumber
    $PartitionSupportedSize = Get-PartitionSupportedSize -DiskNumber $DiskNumber -PartitionNumber $PartitionNumber;
    if ($disk.IsReadOnly)
    {
        Write-Host -ForegroundColor DarkYellow "Skipping drive letter [$DriveLetter] partition number [$PartitionNumber] on disk number [$DiskNumber] because the disk is read-only!";
        continue;
    }
    if ($PartitionActualSize -lt $PartitionSupportedSize.SizeMax) {
        # Compare against the partition's SizeMax, not the disk's size: the disk size always includes
        # space the partition can never use. For example, on a fully partitioned 50 GB disk the
        # partition's SizeMax is 53684994048, while the full disk size, inclusive of unpartitionable
        # space, is 53687091200. The partition will never equal the disk size, but it can equal SizeMax.
        Write-Host -ForegroundColor Yellow "Resizing drive letter [$DriveLetter] partition number [$PartitionNumber] on disk number [$DiskNumber] because `$PartitionActualSize [$PartitionActualSize] is less than `$PartitionSupportedSize.SizeMax [$($PartitionSupportedSize.SizeMax)]"
        Resize-Partition -DiskNumber $DiskNumber -PartitionNumber $PartitionNumber -Size $PartitionSupportedSize.SizeMax -Confirm:$false -ErrorAction SilentlyContinue -ErrorVariable resizeError
        Write-Host -ForegroundColor Green $resizeError
    }
    else {
        Write-Host -ForegroundColor White "The partition is already the requested size, skipping...";
    }
}
See also my related research into doing this:
https://serverfault.com/questions/946676/how-do-i-use-get-physicalextent-on-get-physicaldisk
https://stackoverflow.com/a/4814168/1040437 - Solution using diskpart, requires knowing the volume number

Related

PowerShell function to check the health of remote disks not actually remoting

I am trying to use a script to check the health of physical disks on Lenovo computers using the storcli tool. I found this and have tried to modify it into a function that accepts remote computer names, with the eventual goal of feeding it a server list via Get-Content. For whatever reason it takes the computer input from -ComputerName but does not actually run the commands on the remote computer: it just reads the disks on the local machine and always reports back "Healthy", while I know there are bad disks on the remote machine. I have also run the script directly on the machine with the bad disks and it does work there, reporting the disk as failed. Could anyone offer any insight into what I am missing for this to actually check the remote machines? Remoting is enabled, as I can run other scripts without issue. Thank you in advance.
Function Get-DriveStatus {
    [cmdletbinding()]
    param(
        [string[]]$computername = $env:computername,
        [string]$StorCLILocation = "C:\LenovoToolkit\StorCli64.exe",
        [string]$StorCliCommand = "/c0/eall/sall show j"
    )
    foreach ($computer in $computername) {
        try {
            $ExecuteStoreCLI = & $StorCliLocation $StorCliCommand | out-string
            $ArrayStorCLI = ConvertFrom-Json $ExecuteStoreCLI
        } catch {
            $ScriptError = "StorCli Command has Failed: $($_.Exception.Message)"
            exit
        }
        foreach ($PhysicalDrive in $ArrayStorCLI.Controllers.'Response Data'.'Drive Information') {
            if (($($PhysicalDrive.State) -ne "Onln") -and ($($PhysicalDrive.State -ne "GHS"))) {
                $RAIDStatus += "Physical Drive $($PhysicalDrive.'DID') With Size $($PhysicalDrive.'Size') is $($PhysicalDrive.State)`n"
            }
        }
        # If the variables are not set, we're setting them to a "Healthy" state as our final action.
        if (!$RAIDStatus) { $RAIDStatus = "Healthy" }
        if (!$ScriptError) { $ScriptError = "Healthy" }
        if ($ScriptError -eq "Healthy")
        {
            Write-Host $computer $RAIDStatus
        }
        else
        {
            Write-Host $computer "Error: ".$ScriptError
        }
    } #End foreach $computer
} #End function
$RAIDStatus = $null
$ScriptError = $null
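For what it's worth, the likely gap is that nothing inside the loop ever uses $computer, so & $StorCliLocation always runs on the local machine. A minimal sketch of pushing the call to the remote computer with Invoke-Command (assuming StorCli64.exe sits at the same path on every target and PSRemoting is allowed) could look like this:
# Hypothetical replacement for the local invocation inside the foreach loop
$ExecuteStoreCLI = Invoke-Command -ComputerName $computer -ScriptBlock {
    param($cliPath, $cliCommand)
    & $cliPath $cliCommand | out-string
} -ArgumentList $StorCLILocation, $StorCliCommand
$ArrayStorCLI = ConvertFrom-Json $ExecuteStoreCLI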

How can I check if the PowerShell profile script is running from an SSH session?

I'm trying to work around a bug in Win32-OpenSSH, where -NoProfile -NoLogo is not respected when using pwsh.exe (Core) and logging in remotely via SSH/SCP. One way (of several) I tried was to add the following at the very beginning of my Microsoft.PowerShell_profile.ps1 profile.
function IsInteractive {
    $non_interactive = '-command', '-c', '-encodedcommand', '-e', '-ec', '-file', '-f'
    -not ([Environment]::GetCommandLineArgs() | Where-Object -FilterScript {$PSItem -in $non_interactive})
}
# No point in running this script if not interactive
if (-not (IsInteractive)) {
    exit
}
...
However, this didn't work over a remote SSH session, because when using [Environment]::GetCommandLineArgs() with pwsh.exe, all you get back is:
C:\Program Files\PowerShell\6\pwsh.dll
regardless of whether you are in an interactive session.
Another way I tried was to scan the process tree and look for an sshd parent, but that was also inconclusive, since the shell may be spawned in a way where sshd does not show up as a parent.
So then I tried looking for other things, for example conhost. But on one machine conhost starts before pwsh, whereas on another it starts after; then you need to scan up the tree and maybe find an explorer instance, which only tells you that an earlier process was interactive, not that the current process session is definitely non-interactive.
function showit() {
    $isInter = 'conhost','explorer','wininit','Idle'
    $noInter = 'sshd','pwsh','powershell'
    $CPID = ((Get-Process -Id $PID).Id)
    for (;;) {
        $PNAME = ((Get-Process -Id $CPID).Name)
        Write-Host ("Process: {0,6} {1} " -f $CPID, $PNAME) -fore Red -NoNewline
        $CPID = try { ((gwmi win32_process -Filter "processid='$CPID'").ParentProcessId) } catch { ((Get-Process -Id $CPID).Parent.Id) }
        if ($PNAME -eq "conhost") {
            Write-Host ": interactive" -fore Cyan
            break;
        }
        if ( ($PNAME -eq "explorer") -or ($PNAME -eq "init") -or ($PNAME -eq "sshd") ) {
            # Write-Host ": non-interactive" -fore Cyan
            break;
        }
        ""
    }
}
How can I check if the profile script is running from within a remote SSH session?
Why am I doing this? Because I want to stop the script from running automatically over SSH/SCP/SFTP, while still being able to run it manually (still over SSH). In Bash this is a trivial one-liner.
Some related (but unhelpful) answers:
Powershell test for noninteractive mode
How to check if a Powershell script is running remotely
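One avenue that may be worth checking (an assumption on my part, since behavior can differ between Win32-OpenSSH builds): sshd normally exports SSH_CLIENT and SSH_CONNECTION for any SSH session, and SSH_TTY only when a terminal was allocated, so the profile could bail out of non-interactive SSH/SCP/SFTP sessions along these lines:
# Sketch for the top of Microsoft.PowerShell_profile.ps1 (assumes sshd sets these variables)
$overSsh = [bool]($env:SSH_CLIENT -or $env:SSH_CONNECTION) # any SSH connection
$hasTty  = [bool]$env:SSH_TTY                              # a terminal was allocated (interactive shell)
if ($overSsh -and -not $hasTty) {
    return # skip the rest of the profile for scp/sftp and non-interactive ssh commands
}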

RDS User logoff Script Slow

With the help of several online articles I was able to compile a PowerShell script that logs off all users on each of my RD Session Hosts. I wanted something really gentle about logging users off and writing their profiles back to the roaming profile location on the storage system. However, this is too gentle and takes around four hours to complete with the number of users and RDS servers I have.
The script is designed to set each RDS server to drain mode while still allowing redirection if a server is available; the thought was that within the first 15 minutes I would have the first few servers ready for users to log back into.
All of this works, but I would like to see if there are any suggestions on speeding it up a little.
Here is the loop that goes through each server and logs users out and then sets the server logon mode to enabled:
ForEach ($rdsserver in $rdsservers){
    try {
        query user /server:$rdsserver 2>&1 | select -skip 1 | ? {($_ -split "\s+")[-5]} | % {logoff ($_ -split "\s+")[-6] /server:$rdsserver /V}
        Write-Host "Giving the RDS Server time"
        Write-Progress "Pausing Script" -status "Giving $rdsserver time to settle" -perc (5/(5/100))
        Start-Sleep -Seconds 5
        $RDSH = Get-WmiObject -Class "Win32_TerminalServiceSetting" -Namespace "root\CIMV2\terminalservices" -ComputerName $rdsserver -Authentication PacketPrivacy -Impersonation Impersonate
        $RDSH.SessionBrokerDrainMode = 0
        $RDSH.put() > $null
        Write-Host "$rdsserver is set to:"
        switch ($RDSH.SessionBrokerDrainMode) {
            0 {"Allow all connections."}
            1 {"Allow incoming reconnections but until reboot prohibit new connections."}
            2 {"Allow incoming reconnections but prohibit new connections."}
            default {"The user logon state cannot be determined."}
        }
    }
    catch {}
}
Not sure how many servers you have, but if it's fewer than 50 or so you can do this in parallel with PSJobs. You'll have to wrap your code in a scriptblock, launch each server as a separate job, then wait for them to complete and retrieve any data returned. You won't be able to use Write-Host when doing this, so I've swapped those to Out-File. I also didn't parse out your code for collecting your list of servers; I'm going to assume that works and that it returns a formatted list to a variable $rdsservers. You'll probably also want to modify the messages a bit so you can tell which server is which in the log file, or write a separate log for each server. If you want anything other than the names of the jobs to hit the console, you'll have to output it with Write-Output or a return statement.
$SB = {
    param($rdsserver)
    Start-Sleep -Seconds 5
    $RDSH = Get-WmiObject -Class "Win32_TerminalServiceSetting" -Namespace "root\CIMV2\terminalservices" -ComputerName $rdsserver -Authentication PacketPrivacy -Impersonation Impersonate
    $RDSH.SessionBrokerDrainMode = 0
    $RDSH.put() > $null
    # Set $LogPath to whatever you want; define it inside the scriptblock (or pass it in),
    # because variables from the calling session are not visible inside a job.
    "$rdsserver is set to:" | out-file $LogPath -Append
    switch ($RDSH.SessionBrokerDrainMode) {
        0 {"Allow all connections." | out-file $LogPath -Append}
        1 {"Allow incoming reconnections but until reboot prohibit new connections." | out-file $LogPath -Append}
        2 {"Allow incoming reconnections but prohibit new connections." | out-file $LogPath -Append}
        default {"The user logon state cannot be determined." | out-file $LogPath -Append}
    }
}
foreach ($server in $rdsservers){
    Start-Job -ScriptBlock $SB -ArgumentList $server
}
Get-Job | Wait-Job | Receive-Job
The foreach loop launches the jobs and then the last line waits for all of them to complete before getting any data that was output. You can also set a timeout on the wait if there is a chance your script never completes. If you've got a ton of boxes you may want to look into runspaces over jobs as they have better performance but take more work to use. This Link can help you out if you decide to go that way. I don't have an RDS deployment at the moment to test on so if you get any errors or have trouble getting it to work just post a comment and I'll see what I can do.
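For example, the wait can be capped so the script never hangs indefinitely (600 seconds is an arbitrary value here):
# Wait at most 10 minutes, then collect output from the jobs that finished in time
Get-Job | Wait-Job -Timeout 600 | Receive-Job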
I have something ready for testing, but it may break fantastically. You wizards out there may look at this and laugh. If I did this wrong, please let me know.
$Serverperbatch = 2
$batch = 1
$first = 0
While ($first -lt $rdsservers.Count) {
    # Take the next slice of servers for this batch
    $last = [Math]::Min($first + $Serverperbatch - 1, $rdsservers.Count - 1)
    $ServerBatch = $rdsservers[$first..$last]
    $jobname = "batch$batch"
    Start-Job -Name $jobname -ScriptBlock {
        param ([string[]]$rdsservers)
        Foreach ($rdsserver in $rdsservers) {
            try {
                query user /server:$rdsserver 2>&1 | select -skip 1 | ? {($_ -split "\s+")[-5]} | % {logoff ($_ -split "\s+")[-6] /server:$rdsserver /V}
                $RDSH = Get-WmiObject -Class "Win32_TerminalServiceSetting" -Namespace "root\CIMV2\terminalservices" -ComputerName $rdsserver -Authentication PacketPrivacy -Impersonation Impersonate
                $RDSH.SessionBrokerDrainMode = 0
                $RDSH.put() > $null
            }
            catch {}
        }
    } -ArgumentList (,$ServerBatch) # the leading comma passes the batch as a single array argument
    $batch += 1
    $first = $last + 1
}
Get-Job | Wait-Job | Receive-Job
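When testing a batch run like this, it can also help to confirm which jobs actually finished before trusting the Receive-Job output (a generic check, not from the original post):
Get-Job | Format-Table Name, State
Get-Job | Where-Object State -ne 'Completed' # anything listed here needs a closer look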

IF Statement to Verify VLAN Exists in PowerCLI Script

I am writing a PowerCLI script to automate the creation of VMs based on the data within a CSV file, and I would like to know how to format an IF statement that checks whether the specified VLANs already exist, to avoid cluttering up the screen with errors.
The section of the script dealing with the VLAN creation in its current format:
New-VM -Name $_.Name -VMHost ($esx | Get-Random) -NumCPU $_.NumCPU -Location $Folder
$list = Get-Cluster $_.Cluster | Get-VMHost
foreach ($esxhost in $list)
{ Get-VirtualSwitch -Name $switch -VMHost $esxhost |
New-VirtualPortgroup -Name "VLAN $($_.VLAN)" -VLANID $($_.VLAN)
}
Write-Host "Wait - propagating VLAN $($_.VLAN) to all hosts" -foreground yellow
Start-Sleep 10
I would like to determine a way to have the script do something like:
IF $_.VLAN exists
Write-host "$_.VLAN already present, proceeding to next step"
ELSE DO{ Get-VirtualSwitch -Name $switch -VMHost $esxhost |
New-VirtualPortgroup -Name "VLAN $($_.VLAN)" -VLANID $($_.VLAN)
}
I don't have much experience writing these, so I was hoping for some assistance on how to:
1. Check whether the VLAN already exists in vSphere on the switch
2. Format the IF/ELSE statement properly to avoid cluttering up the PowerCLI window with errors when the script is run
Thank you for any assistance you may provide
EDIT: updated to check for the VLAN rather than the vSwitch.
You could use Get-VirtualPortGroup for this and check whether the VLAN IDs it returns contain your VLAN ID. This won't work for distributed switches, as those use a different set of cmdlets.
# $host is a built-in, read-only automatic variable in PowerShell, so use a different name
$vmHost = 'YourHost'
$vlanid = 'YourVlanId'
if ((Get-VirtualPortGroup -VMHost $vmHost).VLanId -contains $vlanid)
{
    Write-Output 'vlan present'
}
else
{
    Write-Output 'vlan missing'
    #your code to create vlan here
}
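Folding that check into the per-host loop from the question might look roughly like this (a sketch that reuses $switch, $list and the CSV row $_ from the original script):
foreach ($esxhost in $list) {
    if ((Get-VirtualPortGroup -VMHost $esxhost).VLanId -contains $_.VLAN) {
        Write-Host "VLAN $($_.VLAN) already present on $esxhost, proceeding to next step"
    }
    else {
        Get-VirtualSwitch -Name $switch -VMHost $esxhost |
            New-VirtualPortgroup -Name "VLAN $($_.VLAN)" -VLANID $($_.VLAN)
    }
}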

PowerShell Job throttle blocking the server

I'm experiencing a strange issue with our script server being overloaded and running out of resources. We have a script that copies data from one location to another, driven by a large input file that contains over 200 lines of text in the format 'Source path, Destination path'.
We are now trying to throttle the maximum number of jobs we kick off at once, and I think the throttling itself works. But for some reason we still run out of resources on the server when the input file contains more than 94 lines; this became apparent after some testing.
We tried upgrading our Windows 2008 R2 server with PowerShell 4.0 to 4 processors and 8 GB of RAM, but no luck. So I assume my throttling isn't working as designed.
Error code:
Insufficient system resources exist to complete the requested service.
The code:
$MaxThreads = 4
$FunctionFeed = Import-Csv -Path $File -Delimiter ',' -Header 'Source', 'Destination'
$Jobs = @()
Function Wait-MaxRunningJobs {
    Param (
        $Name,
        [Int]$MaxThreads
    )
    Process {
        $Running = @($Name | where State -eq Running)
        while ($Running.Count -ge $MaxThreads) {
            $Finished = Wait-Job -Job $Name -Any
            $Running = @($Name | where State -eq Running)
        }
    }
}
$ScriptBlock = {
    Try {
        Robocopy.exe $Using:Line.Source $Using:Line.Destination $Using:Line.File /MIR /Z /R:3 /W:15 /NP /MT:8 | Out-File $Using:LogFile
        [PSCustomObject]@{
            Source = if ($Using:Line.Source) {$Using:Line.Source} else {'NA'}
            Target = if ($Using:Line.Destination) {$Using:Line.Destination} else {'NA'}
        }
    }
    Catch {
        "Robocopy | ERROR: $($Error[0].Exception.Message)" |
            Out-File -LiteralPath $Using:LogFile
        throw $($Error[0].Exception.Message)
    }
}
ForEach ($Line in $FunctionFeed) {
    $LogParams = @{
        LogFolder = $LogFolder
        Name = $Line.Destination + '.log'
        Date = 'ScriptStartTime'
        Unique = $True
    }
    $LogFile = New-LogFileNameHC @LogParams
    ' ' >> $LogFile # Avoid not being able to write to log
    $Jobs += Start-Job -Name RoboCopy -ScriptBlock $ScriptBlock
    Wait-MaxRunningJobs -Name $Jobs -MaxThreads $MaxThreads
}
if ($Jobs) {
    Wait-Job -Job $Jobs
    $JobResults = $Jobs | Receive-Job
}
Am I missing something here? Thank you for your help.
You're using background jobs, which actually run in remote sessions on the local machine. Remote sessions are intentionally resource-restricted, according to the settings in the session configuration. You can check the current settings using
Get-PSSessionConfiguration
And adjust the settings to increase the resources available to the sessions with
Set-PSSessionConfiguration
You may need to do some testing to determine exactly what resource limit you're hitting, and what adjustments need to be made for this particular application to work.
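For instance (assuming PowerShell 3.0 or later and the default Microsoft.PowerShell endpoint), the per-session memory quota can be raised through a transport option; the same quota can also be changed via the WSMan: drive, as shown in the solution below:
# Raise the memory quota for the default endpoint to 2 GB (WinRM is restarted when this is applied)
$opt = New-PSTransportOption -MaxMemoryPerSessionMB 2048
Set-PSSessionConfiguration -Name Microsoft.PowerShell -TransportOption $opt -Force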
Fixed the problem by enlarging MaxMemoryPerShellMB for remote sessions from 1 GB to 2 GB, as described here. Keep in mind that Start-Job uses a remote PowerShell session, as mjolinor already indicated, so this setting applies to PowerShell jobs.
Solution:
# 'System.OutOfMemoryException error message' when running Robocopy and over 94 PowerShell-Jobs:
Get-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB # Default 1024
Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 2048
# Set PowerShell plugins memory from 1 GB to 2 GB
Get-Item WSMan:\localhost\Plugin\Microsoft.PowerShell\Quotas\MaxMemoryPerShellMB # Default 1024
Set-Item WSMan:\localhost\Plugin\Microsoft.PowerShell\Quotas\MaxMemoryPerShellMB 2048
Restart-Service winrm