How to fix: Starting services stored in variables - PowerShell

I'm currently trying to make a script which does the following:
Get the services which are running.
Get all services with startup type Automatic which aren't running.
Start all services which aren't running but have startup type Automatic.
The script will be run on different Windows servers. I already tried comparing the services which should run ($servicestorun) to the ones actually running ($servicesrunning), and then starting the ones that aren't running but should be.
Can someone point me in the right direction or provide the code needed to fix it?
$servicesrunning = Get-Service | Where-Object {
    $_.StartUpType -eq 'automatic' -and $_.Status -eq 'running'
} # gets running services
$servicestorun = Get-Service | Where-Object {
    $_.StartUpType -eq 'automatic' -and
    $_.Status -eq 'stopped'
} # gets services with startup type automatic and status stopped
# checks if all services which should run, actually run
if ($servicestorun -eq $servicesrunning) {
    echo "all good" # if positive, message all good then exit
    exit
} else {
    Start-Service $servicestorun # starts all services with startup type automatic and status stopped
}
And the error I'm getting (translated from the German original):
Start-Service : Cannot validate argument on parameter "InputObject".

Your comparison will always evaluate to "false" for two reasons:
PowerShell arrays are objects, and -eq doesn't compare two arrays element by element; with an array as the left operand it acts as a filter, so it never tells you whether both arrays contain the same elements.
The two lists you're comparing are mutually exclusive (one holds running services, the other stopped ones), so even if you could compare the arrays using the -eq operator (which, again, you can't), the result would still be "false".
Because of that, your code will always jump to the else branch, even if there are no services to start. In that situation you pass an empty variable to Start-Service, which results in the error you observed.
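As an aside, if you ever do need to test whether two collections hold the same elements, Compare-Object is the usual tool; a minimal sketch, comparing by service name:
$diff = Compare-Object -ReferenceObject $servicestorun -DifferenceObject $servicesrunning -Property Name
if (-not $diff) { Write-Output 'all good' } # no output from Compare-Object means no differences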
Since you want to start all services that are configured for automatic start, but aren't currently running, simply use the pipeline:
Get-Service | Where-Object {
    $_.StartType -eq 'Automatic' -and # note: StartType, not StartUpType (see caveat below)
    $_.Status -ne 'Running'
} | Start-Service
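One caveat worth knowing: on Windows PowerShell 5.1 the service objects expose the startup mode as StartType, not StartUpType, and a mistyped property inside a Where-Object filter fails silently (the filter simply matches nothing). If in doubt, check what your version exposes:
Get-Service | Get-Member -Name 'Start*'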

How to prevent multiple instances of the same PowerShell 7 script?

Context
On a build server, a PowerShell 7 script, script.ps1, will be started and left running in the background on a remote computer.
What I want
A safety net to ensure that at most one instance of script.ps1 is running at any time, whether on the build server or on the remote computer.
What I tried:
I tried meddling with PowerShell 7 background jobs (by executing script.ps1 as a job inside a wrapper script, wrapper.ps1), but that didn't solve the problem: jobs don't carry over to (and can't be accessed from) other PowerShell sessions.
What I tried looks like this:
# inside wrapper.ps1
$running_jobs = Get-Job -State Running | Where-Object { $_.Name -eq "ImportantJob" }
if ($running_jobs.Count -eq 0) {
    # -FilePath (rather than a bare positional argument) is needed to run a script file as a job
    Start-Job -FilePath .\script.ps1 -Name "ImportantJob" -ArgumentList @($some_variables)
} else {
    Write-Warning "Could not start new job; the existing job must be terminated beforehand."
}
To reiterate, the problem with that is that Get-Job only returns the jobs running in the current session, so this code limits one job per session; multiple instances can still be run if multiple sessions are mistakenly opened.
What I also tried:
I tried to look into Get-CimInstance:
$processes = Get-CimInstance -ClassName Win32_Process | Where-Object {$_.Name -eq "pwsh.exe"}
While this does return the currently running PowerShell instances, those objects carry no information about the script being executed, as I confirmed by running:
foreach ($p in $processes) {
    $p | Format-List *
}
I'm therefore lost and I feel like I'm missing something.
I appreciate any help or suggestions.
I like to define a config path under $env:ProgramData using a CompanyName\ProjectName scheme, so I can store "per system" configuration there.
You could use a similar scheme with a well-defined location for a lock file that is created when the script starts and deleted at the end (as already suggested in the comments).
It is then up to you to add additional checks if needed (what happens if the script exits prematurely while the lock is still present?).
Example
# Define default path (not user specific)
$ConfigLocation = "$Env:ProgramData\CompanyName\ProjectName"
# Create the path if it does not exist
New-Item -ItemType Directory -Path $ConfigLocation -EA 0 | Out-Null
$LockFilePath = "$ConfigLocation\Instance.Lock"
# Bail out if another instance already holds the lock
$Locked = $null -ne (Get-Item -Path $LockFilePath -EA 0)
if ($Locked) { Exit }
# Lock
New-Item -Path $LockFilePath | Out-Null
# Do stuff
# Remove lock
Remove-Item -Path $LockFilePath
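To address the premature-exit question raised above, one variant (a sketch, not a bulletproof solution; it won't help if the process is killed outright) is to wrap the work in try/finally so the lock is removed even when the script throws:
# Lock, then guarantee removal even if the work throws
New-Item -Path $LockFilePath | Out-Null
try {
    # Do stuff
}
finally {
    Remove-Item -Path $LockFilePath -EA 0
}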
Alternatively, on Windows, you could use a scheduled task with no schedule and with the setting "If the task is already running, then the following rule applies: Do not start a new instance". Then, instead of calling the original script directly, you call a proxy script that just launches the scheduled task.
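The proxy script itself can be a one-liner. A hedged sketch, assuming a task named "script" was registered beforehand under a "ProjectName" folder in Task Scheduler:
# proxy.ps1 - let Task Scheduler enforce the single-instance rule
Start-ScheduledTask -TaskPath '\ProjectName\' -TaskName 'script'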

Getting specific app pool's worker process in PowerShell returns value but process is already stopped

I have multiple websites - each on a separate app pool.
The app pool I'm referring to has 1 worker process.
After stopping the app pool, I'm trying to wait and verify that the worker process has stopped.
$appPoolName = $appPool.name
Write-Host "appPoolName: $appPoolName"
# $retrys is assumed to be initialized earlier in the script
$w3wp = Get-ChildItem "IIS:\AppPools\$appPoolName\WorkerProcesses\"
while ($w3wp -and $retrys -gt 0)
{
    Write-Host "w3wp value is: $w3wp"
    Start-Sleep -s 10
    $retrys--
    $w3wp = Get-ChildItem "IIS:\AppPools\$appPoolName\WorkerProcesses\"
    Write-Host "w3wp value(2) is: $w3wp"
    if (-not $w3wp)
    {
        break
    }
}
The print of both values is always "Microsoft.IIs.PowerShell.Framework.ConfigurationElement", even when I can see the process has stopped and is no longer in Task Manager.
Also strange: when I open another PowerShell session while the code runs and call
$w3wp = Get-ChildItem "IIS:\AppPools\$appPoolName\WorkerProcesses\";
$w3wp has no value (because the worker process no longer exists).
Any ideas why the value isn't changing?
Or maybe how to do that differently?
Thanks in advance :)
I think the IIS: provider is caching data. I don't know of a fix, but here are a couple of alternatives:
Use WMI from PowerShell:
Get-WmiObject -Namespace 'root\WebAdministration' -Class 'WorkerProcess' | Select-Object AppPoolName, ProcessId
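Note that Get-WmiObject (gwmi) exists only in Windows PowerShell; on PowerShell 6+ the CIM equivalent would be:
Get-CimInstance -Namespace 'root\WebAdministration' -ClassName 'WorkerProcess' | Select-Object AppPoolName, ProcessId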
Run appcmd
appcmd list wp
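Putting the WMI alternative to work in the original wait loop, a sketch (this assumes the IIS WMI provider that backs root\WebAdministration is installed, and reuses $appPoolName from the question):
$retries = 30
do {
    $wp = Get-CimInstance -Namespace 'root\WebAdministration' -ClassName 'WorkerProcess' |
        Where-Object { $_.AppPoolName -eq $appPoolName }
    if (-not $wp) { break } # worker process is gone
    Start-Sleep -Seconds 10
} while (--$retries -gt 0)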

Using PowerShell to identify a machine as a server or PC

I'm trying to write a PowerShell script that will give me a list of roles and features if run on a server, but if run on a client machine will say "Only able to execute command on a server."
I've played around with this script a lot and can get it to run on either a client machine or a server (depending on what I've tweaked), but not both. Here's the latest iteration:
$MyOS="wmic os get Caption"
if("$MyOS -contains *Server*") {
Get-WindowsFeature | Where-Object {$_. installstate -eq "installed"
}}else{
echo "Only able to execute command on a server."}
What am I doing wrong?
The quotes around your wmic command assign that literal string to $MyOS instead of executing the command. Still, I would recommend using native PowerShell commands such as Get-CimInstance. Your if condition has the same problem: the quotes turn it into a (non-empty) string, which always evaluates to true.
$MyOS = Get-CimInstance Win32_OperatingSystem
if ($MyOS.Caption -like "*Server*") {
    Get-WindowsFeature | Where-Object { $_.InstallState -eq "installed" }
}
else {
    Write-Output "Only able to execute command on a server."
}
You can also use the ProductType property. This is a UInt32 with the following values:
1 - Workstation
2 - Domain Controller
3 - Server
$MyOS = (Get-CimInstance Win32_OperatingSystem).ProductType
if ($MyOS -gt 1) {
    Get-WindowsFeature | Where-Object { $_.InstallState -eq "installed" }
}
else {
    Write-Output "Only able to execute command on a server."
}
Try using '-like' instead of '-contains'; it should work.
Generally, I try to avoid pre-checks like this that make assumptions about functionality that may not be true forever. There's no guarantee that Get-WindowsFeature won't start working on client OSes in a future update.
I prefer to just trap errors and proceed accordingly. Unfortunately, this particular command produces a generic Exception rather than a more specifically typed exception, so you can't do much other than string-matching on the error message to verify exactly what happened. But there's very little that can go wrong with this command other than the client-OS error, so it's fairly safe to assume what went wrong if it throws.
try {
    Get-WindowsFeature | Where-Object { $_.InstallState -eq "installed" }
} catch {
    Write-Warning "Only able to execute command on a server."
}
If you don't want to accidentally hide an error that isn't the client-OS one, change the warning message to use the actual text from the error. This also gets you free localization if you happen to run this code in a locale with a different language than your own.
Write-Warning $_.Exception.Message
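In context, the catch block would look like this (same sketch as above, just passing the real message through):
try {
    Get-WindowsFeature | Where-Object { $_.InstallState -eq "installed" }
} catch {
    # Surface whatever actually went wrong, in the OS display language
    Write-Warning $_.Exception.Message
}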

Execute a different command depending on the output of the previous

I am trying out something quite simple, yet I can't find an answer, or rather can't figure out how to phrase the question for Google.
So I thought it would be better to just show what I'm doing.
Here is the script I'm working on: [screenshot in the original post]
What it does is simple: it gets all virtual machines, filtered by their state (running, saved, or off), and then starts or stops them. That is where I'm having trouble.
I tried to pipe it into different commands, but it keeps giving an error:
The input object cannot be bound to any parameters for the command either because the command does not take pipeline input or the input and its properties do not match any of the parameters that take pipeline input.
So what I want is: if the machines are running, save them. Is there a way to do this?
Use a ForEach-Object loop and a switch statement:
Get-VM -VMName $name | ForEach-Object {
    switch ($_.State) {
        'running' {
            # do some
        }
        'saved' {
            # do other
        }
        'off' {
            # do something else
        }
        default {
            throw ('Unrecognized state: {0}' -f $_.State)
        }
    }
}
I think the actual issue here (shown by the error message) is that Start-VM doesn't accept pipeline input. I'm guessing this is the Hyper-V Start-VM cmdlet, by the way.
You could do this to get around the lack of pipeline-aware parameters:
Get-VM -VMName $name | Where-Object { $_.State -eq $state } | ForEach-Object { Start-VM -VM $_ }
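For the specific "if the machines are running, save them" case, the same pattern works with Save-VM (a sketch assuming the Hyper-V module, whose Save-VM also takes a -VM parameter):
Get-VM -VMName $name | Where-Object { $_.State -eq 'Running' } | ForEach-Object { Save-VM -VM $_ }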

Determining when a machine is in a good state for PowerShell Remoting?

Update - the original question claimed that I was able to successfully perform an Invoke-Command and then shortly afterwards was unable to; I thought it was due to processes running during first login after a Windows upgrade.
It turns out the PC was actually starting, running a quick batch/cmd file, and then restarting. This is what led to PS Remoting working and then suddenly not. The restart came so quickly after first boot that I didn't realize it was happening. Sorry for the bad question.
For the curious, the machine was restarting because of a remnant of the Microsoft Deployment Toolkit in-place upgrade process. The way MDT completes its task-sequence post-upgrade is problematic for many reasons, and now I've got another to count.
Old details (no longer relevant, with incorrect assumption that machine was not restarting after first successful Invoke-Command):
I'm automating various things with VMs in Hyper-V using powershell and powershell remoting. I'll start up a VM and then want to run some commands on it via powershell.
I'm struggling with determining when I can safely start running the remote commands via things like Invoke-Command. I can't start immediately as I need to let the machine start up.
Right now I poll the VM with a one second sleep between calls until the following function returns $true:
function VMIsReady {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$True)][object]$VM
    )
    $heartbeat = $vm.Heartbeat
    Write-Host "vm heartbeat is $heartbeat"
    if (($heartbeat -eq 'OkApplicationsHealthy') -or ($heartbeat -eq 'OkApplicationsUnknown'))
    {
        try
        {
            # Probe PowerShell Direct; any failure means remoting isn't ready yet
            Invoke-Command -VMName $vm.Name -Credential $(GetVMCredentials) {$env:computername} | Out-Null
        }
        catch [System.Management.Automation.RuntimeException]
        {
            Write-Host 'Caught expected automation runtime exception'
            return $false
        }
        Write-Host 'remoting ready'
        return $true
    }
    # No healthy heartbeat yet: fall through, returning nothing (treated as $false by the caller)
}
This usually works well; however, after a Windows upgrade there are issues: I'll get Hyper-V remoting errors of various sorts even after VMIsReady returns $true.
These errors happen while the VM is going through the first user login after the upgrade (Windows showing "Hi", "We've got some updates for your PC", "This might take several minutes - Don't turn off your PC"). VMIsReady returns true right as this sequence starts; I imagine I should wait until the sequence is done, but I've no idea how to know when that is.
Is there a better way of determining when the machine is in a state where I can expect remoting to work without issue? Perhaps a way to tell when a user is fully logged on?
You can use Test-WSMan.
Or run a script block via Invoke-Command that returns a response from the server:
$Response = $false
try {
    # -ErrorAction Stop turns connection failures into catchable exceptions
    $Response = Invoke-Command -ComputerName Test-Computer -ScriptBlock { return $true } -ErrorAction Stop
} catch {
    return $false
}
return ($Response -eq $true)
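For the Test-WSMan route, a minimal polling sketch; note this assumes WinRM connectivity to the guest over the network (the hypothetical $vmName here is a resolvable host name), not PowerShell Direct's Invoke-Command -VMName:
# Wait up to 5 minutes for WinRM to answer
$deadline = (Get-Date).AddMinutes(5)
while ((Get-Date) -lt $deadline) {
    if (Test-WSMan -ComputerName $vmName -ErrorAction SilentlyContinue) { break }
    Start-Sleep -Seconds 1
}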