I have tried to restrict multiple executions of the same script in PowerShell, using the code below. It works, but with a major drawback: when I close the PowerShell window and run the same script again, it executes once more.
Code:
$history = Get-History
Write-Host "history=" $history.Length
if ($history.Length -gt 0) {
    Write-Host "this script already run using History"
    return
} else {
    Write-Host "First time using history"
}
How can I avoid this drawback?
I presume you want to make sure that the script is not run simultaneously from different PowerShell processes, or from the same process as some sort of self-call.
In either case there isn't anything built into PowerShell for this, so you need to mimic a semaphore.
For the same process, you can leverage a global variable as the lock and a try/finally block to release it:
$variableName="Something unique"
try
{
if(Get-Variable -Name $variableName -Scope Global -ErrorAction SilentlyContinue)
{
Write-Warning "Script is already executing"
return
}
else
{
Set-Variable -Name $variableName -Value 1 -Scope Global
}
# The rest of the script
}
finally
{
Remove-Variable -Name $variableName -ErrorAction SilentlyContinue
}
Now if you want to do the same across processes, you need to store something outside of your process. A file is a good fit, with a similar mindset using Test-Path, New-Item and Remove-Item.
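A minimal sketch of that file-based variant (the path is just an example; pick a well-known location that all instances can see):
$lockFile = Join-Path $env:TEMP 'MyScript.lock'   # example path only
if (Test-Path $lockFile)
{
    Write-Warning "Script is already executing"
    return
}
New-Item -Path $lockFile -ItemType File | Out-Null
try
{
    # The rest of the script
}
finally
{
    Remove-Item -Path $lockFile -ErrorAction SilentlyContinue
}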
In either case, please note that this trick, which merely mimics a semaphore, is not as robust as an actual semaphore and can leak (for example, if the process is killed before the finally block runs).
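If you want something closer to the real thing, a named .NET mutex works across processes and sessions; a minimal sketch (the mutex name is arbitrary, just keep it unique to your script):
# 'Global\' makes the mutex visible machine-wide, across sessions
$mutex = New-Object System.Threading.Mutex($false, 'Global\MyScriptSingleInstance')
if (-not $mutex.WaitOne(0))
{
    Write-Warning "Script is already executing"
    return
}
try
{
    # The rest of the script
}
finally
{
    $mutex.ReleaseMutex()
}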
Context
On a build server, a PowerShell 7 script, script.ps1, will be started and will run in the background on the remote computer.
What I want
A safety net to ensure that at most one instance of the script.ps1 script is running at any time, whether on the build server or on the remote computer.
What I tried:
I tried meddling with PowerShell 7 background jobs (by executing script.ps1 as a job inside a wrapper script wrapper.ps1), but that didn't solve the problem, as jobs do not carry over to (and can't be accessed from) other PowerShell sessions.
What I tried looks like this:
# inside wrapper.ps1
$running_jobs = Get-Job -State Running | Where-Object { $_.Name -eq "ImportantJob" }
if ($running_jobs.Count -eq 0) {
    Start-Job -FilePath .\script.ps1 -Name "ImportantJob" -ArgumentList @($some_variables)
} else {
    Write-Warning "Could not start new job; existing job detected, it must be terminated beforehand."
}
To reiterate, the problem is that $running_jobs only returns the jobs running in the current session, so this code only limits one job per session, allowing multiple instances to be run if multiple sessions are mistakenly opened.
What I also tried:
I tried to look into Get-CimInstance:
$processes = Get-CimInstance -ClassName Win32_Process | Where-Object {$_.Name -eq "pwsh.exe"}
While this does return the currently running PowerShell instances, these objects carry no information about the script being executed, as shown when I run:
foreach ($p in $processes) {
    $p | Format-List *
}
I'm therefore lost and I feel like I'm missing something.
I appreciate any help or suggestions.
I like to define a config path under $env:ProgramData using a CompanyName\ProjectName scheme so I can put "per system" configuration there.
You could use a similar scheme with a defined location to store a lock file that is created when the script runs and deleted at the end of it (as suggested already within the comments).
Then it is up to you to add additional checks if needed (what happens if the script exits prematurely while the lock is still present?).
Example
# Define default path (not user specific)
$ConfigLocation = "$Env:ProgramData\CompanyName\ProjectName"
# Create path if it does not exist
New-Item -ItemType Directory -Path $ConfigLocation -EA 0 | Out-Null
$LockFilePath = "$ConfigLocation\Instance.Lock"
# Check whether another instance holds the lock
$Locked = $null -ne (Get-Item -Path $LockFilePath -EA 0)
if ($Locked) { Exit }
# Lock
New-Item -Path $LockFilePath | Out-Null
# Do stuff
# Remove lock
Remove-Item -Path $LockFilePath
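To address the premature-exit question above, the lock handling can be wrapped in a try/finally so the lock is released even when the script throws (a sketch reusing the same $LockFilePath):
New-Item -Path $LockFilePath | Out-Null
try {
    # Do stuff
}
finally {
    # Runs even on an exception, so no stale lock is left behind
    Remove-Item -Path $LockFilePath -ErrorAction SilentlyContinue
}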
Alternatively, on Windows, you could also use a scheduled task without a schedule and with the setting "If the task is already running, then the following rule applies: Do not start a new instance". From there, instead of calling the original script, you call a proxy script that just launches the scheduled task.
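The proxy script then reduces to a single call; a sketch, assuming a task registered as ProjectName\script with that single-instance setting:
# Hypothetical task path/name; Task Scheduler itself enforces the single-instance rule
Start-ScheduledTask -TaskPath '\ProjectName\' -TaskName 'script'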
Is there any sane, reliable contract that dictates whether Write-Host is supported in a given PowerShell host implementation, in a script that could be run against any reasonable host implementation?
(Assume that I understand the difference between Write-Host and Write-Output/Write-Verbose and that I definitely do want Write-Host semantics, if supported, for this specific human-readable text.)
I thought about trying to interrogate the $Host variable, or $Host.UI/$Host.UI.RawUI, but the only pertinent differences I am spotting are:
in $Host.Name:
The Windows powershell.exe commandline has $Host.Name = 'ConsoleHost'
ISE has $Host.Name = 'Windows PowerShell ISE Host'
SQL Server Agent job steps have $Host.Name = 'Default Host'
I have none of the non-Windows versions installed, but I expect they are different
in $Host.UI.RawUI:
The Windows powershell.exe commandline returns values for all properties of $Host.UI.RawUI
ISE returns no value (or $null) for some properties of $Host.UI.RawUI, e.g. $Host.UI.RawUI.CursorSize
SQL Server Agent job steps return no values for all of $Host.UI.RawUI
Again, I can't check in any of the other platforms
Maintaining a list of $Host.Name values that support Write-Host seems like it would be a bit of a burden, especially with PowerShell being cross-platform now. I would reasonably want the script to be callable from any host and just do the right thing.
Background
I have written a script that can be reasonably run from within the PowerShell command prompt, from within the ISE or from within a SQL Server Agent job. The output of this script is entirely textual, for human reading. When run from the command prompt or ISE, the output is colorized using Write-Host.
SQL Server jobs can be set up in two different ways, and both support capturing the output into the SQL Server Agent log viewer:
via a CmdExec step, which is simple command-line execution, where the Job Step command text is an executable and its arguments, so you invoke the powershell.exe executable. Captured output is the stdout/stderr of the process:
powershell.exe -Command x:\pathto\script.ps1 -Arg1 -Arg2 -Etc
via a PowerShell step, where the Job Step command text is raw PS script interpreted by its own embedded PowerShell host implementation. Captured output is whatever is written via Write-Output or Write-Error:
#whatever
Do-WhateverPowershellCommandYouWant
x:\pathto\script.ps1 -Arg1 -Arg2 -Etc
Due to some other foibles of the SQL Server host implementation, I find that you can emit output using either Write-Output or Write-Error, but not both. If the job step fails (i.e. if you throw or Write-Error 'foo' -EA 'Stop'), you only get the error stream in the log and, if it succeeds, you only get the output stream in the log.
Additionally, the embedded PS implementation does not support Write-Host. Up to at least SQL Server 2016, Write-Host throws a System.Management.Automation.Host.HostException with the message A command that prompts the user failed because the host program or the command type does not support user interaction.
To support all of my use-cases, so far, I took to using a custom function Write-Message which was essentially set up like (simplified):
$script:can_write_host = $true
$script:has_errors = $false
$script:message_stream = New-Object Text.StringBuilder
function Write-Message {
    Param($message, [Switch]$iserror)
    if ($script:can_write_host) {
        $private:color = if ($iserror) { 'Red' } else { 'White' }
        try { Write-Host $message -ForegroundColor $private:color }
        catch [Management.Automation.Host.HostException] { $script:can_write_host = $false }
    }
    if (-not $script:can_write_host) {
        $script:message_stream.AppendLine($message) | Out-Null
    }
    if ($iserror) { $script:has_errors = $true }
}
try {
    <# MAIN SCRIPT BODY RUNS HERE #>
}
catch {
    Write-Message -Message ("Unhandled error: " + ($_ | Format-List | Out-String)) -IsError
}
finally {
    if (-not $script:can_write_host) {
        if ($script:has_errors) { Write-Error ($script:message_stream.ToString()) -EA 'Stop' }
        else { Write-Output ($script:message_stream.ToString()) }
    }
}
As of SQL Server 2019 (perhaps earlier), it appears Write-Host no longer throws an exception in the embedded SQL Server Agent PS host, but is instead a no-op that emits nothing to either output or error streams. Since there is no exception, my script's Write-Message function can no longer reliably detect whether it should use Write-Host or StringBuilder.AppendLine.
The basic workaround for SQL Server Agent jobs is to use the more mature CmdExec step type (where Write-Output and Write-Host both get captured as stdout). However, I prefer the PowerShell step type for (among other reasons) its ability to split the command reliably across multiple lines, so I am keen to see whether there is a more holistic, PowerShell-based approach to determining whether Write-Host does anything useful for the host I am in.
Just check whether your host is user-interactive or a service-type environment. [Environment]::UserInteractive is $false for processes running without an interactive desktop, such as Windows services (which is how SQL Server Agent job steps run), and $true in the console and the ISE.
$script:can_write_host = [Environment]::UserInteractive
Another way to track the output of a script in real time is to push that output to a log file and then monitor it in real time using CMTrace (formerly Trace32). This is just a workaround, but it might work out for you.
Add-Content -Path "C:\Users\username\Documents\PS_log.log" -Value $variablewithvalue
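If you don't have CMTrace at hand, another console can tail the same file with plain PowerShell (the path below mirrors the example above):
# Follow the log as the script appends to it; -Tail limits the initial backlog shown
Get-Content -Path "C:\Users\username\Documents\PS_log.log" -Wait -Tail 10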
Related to Terminate part of powershell script and continue.
Partially related to Powershell Job Always Shows Complete.
My script runs locally and accesses the registry hive of a remote PC. I need the value of registry keys to be written into a $RegHive variable. I also want to monitor it as a job, so that if some PC freezes, I can terminate the command and move on to another PC.
My original code would be:
$global:RegHive = $null
$job = Start-Job -ScriptBlock {
    $RegHive = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey("SomeKeyName", "SomePCName")
}
But no matter what I do, the variable $RegHive is empty.
If I do $RegHive = (Get-Job | Receive-Job), some value gets assigned to $RegHive that on one side looks exactly as if I had run it normally without a job/scriptblock, i.e.:
$RegHive = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey("SomeKeyName", "SomePCName")
and even has the same $RegHive.SubKeyCount
But the "normal" one has $RegHive.GetSubKeyName() method and the one from job doesn't.
How do I avoid assigning the variable with Receive-Job and instead do the assignment directly inside the scriptblock that is run as a job?
In simple words:
$job = Start-Job -ScriptBlock {$a = 1 + 2}
How do I get $a to equal 3 without $a = (Get-Job | Receive-Job)?
This might be helpful for you. The job itself acts sort of like a variable.
What you can do is name the job and then call it by name with -Keep to keep its value stored. It will hold all final output inside itself until you receive it (with -Keep the output is retained; the default is to remove it once received).
$global:RegHive = $null
Start-Job -Name "RegHive" -ScriptBlock {
[Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey("SomeKeyName", "SomePCName")
}
Receive-Job -Name "RegHive" -Keep
Obviously, calling Receive-Job immediately afterwards defeats the purpose of jobs: they add a lot of overhead and are only efficient when you need to do multiple things at once. If you start hundreds or thousands at once, you can do Get-Job | Wait-Job and, once they are finished, start using their outputs. Wait-Job also accepts job names, or it can wait on your entire list of jobs.
another option to set the variable is
$RegHive = "Receive-Job -Name "RegHive"
and finally, you can do this to use the value
get-<insert command> -value "$(Receive-Job -Name 'RegHive' -Keep)" -argument2 "YADA YADA"
Remember, -Keep does not delete the output, so it can be "received" again later.
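Applied to the simple example from the question: a job's output only ever comes back through Receive-Job, so the closest you can get is receiving it into the variable once the job finishes:
Start-Job -Name "Add" -ScriptBlock { 1 + 2 } | Wait-Job | Out-Null
$a = Receive-Job -Name "Add" -Keep   # $a is now 3, and the output stays stored in the job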
I am trying out something which is quite simple, yet I can't find an answer, or rather I can't figure out how to ask the question on Google.
So I thought it would be better to just show what I'm doing with pictures, here.
Here is the script I'm working on:
What it does is simple: it gets all virtual machines depending on their state (running, saved or off) and then starts or stops them. That is where I am having trouble.
I tried to pipe it into different commands, but it keeps giving this error:
The input object cannot be bound to any parameters for the command either because the command does not take pipeline input or the input and its properties do not match any of the parameters that take pipeline input.
So what I want is: if the machines are running, then save them. Is there a way to do so?
Use a ForEach-Object loop and a switch statement:
Get-VM -VMName $name | ForEach-Object {
    switch ($_.State) {
        'running' {
            # do some
        }
        'saved' {
            # do other
        }
        'off' {
            # do something else
        }
        default {
            throw ('Unrecognized state: {0}' -f $_.State)
        }
    }
}
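Filling in the branches for your goal of saving running machines (a sketch assuming the Hyper-V module's Save-VM and Start-VM cmdlets):
Get-VM -VMName $name | ForEach-Object {
    switch ($_.State) {
        'running' { Save-VM -VM $_ }    # suspend a running VM to disk
        'saved'   { Start-VM -VM $_ }   # resume a saved VM
        'off'     { Start-VM -VM $_ }   # start a stopped VM
        default   { throw ('Unrecognized state: {0}' -f $_.State) }
    }
}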
I think the actual issue here (shown by the error message) is that Start-VM doesn't accept pipeline input. I'm guessing this is the Hyper-V Start-VM cmdlet, by the way.
You could do this to get around the lack of pipeline-aware parameters:
Get-VM -VMName $name | Where-Object { $_.State -eq $state } | ForEach-Object { Start-VM -VM $_ }
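For the specific "save the running ones" case, the same pattern works with Save-VM (again assuming the Hyper-V module):
Get-VM -VMName $name | Where-Object { $_.State -eq 'Running' } | ForEach-Object { Save-VM -VM $_ }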
I'm trying to create a script that can export a user's mailbox to a PST, remotely (Exchange Server 2010 console is installed on the server we're running this from, and the module is loaded correctly). It's being done using a script so our L2 admins do not have to manually perform the task. Here's the MWE.
$UserID = Read-Host "Enter username"
$PstDestination = "\\ExServer\Share\$UserID.pst"
$Date = Get-Date -Format "yyyyMMddhhmmss"
$ExportName = "$UserID" + "$Date"
try {
    New-MailboxExportRequest -Mailbox $UserID -FilePath $PstDestination -Name $ExportName -ErrorAction Stop -WarningAction SilentlyContinue | Out-Null
    # Loop through the process to track its status and write progress
    do {
        $Percentage = (Get-MailboxExportRequest -Name $ExportName | Get-MailboxExportRequestStatistics).PercentComplete
        Write-Progress "Mailbox export is in progress." -Status "Export $Percentage% complete" -PercentComplete "$Percentage"
    }
    while ($Percentage -ne 100)
    Write-Output "$UserID`'s mailbox has been successfully exported. The archive can be found at $PstDestination."
}
catch {
    Write-Output "There was an error exporting the mailbox. The process was aborted."
}
The problem is that as soon as we initiate the export, the task gets queued. Sometimes the export remains queued for a very long time, and the script is currently unable to figure out when the task begins; when it does begin, the script cannot display the progress correctly. The export happens in the background, but the script remains stuck there, so anything after the export does not get executed, and the whole thing then has to be done manually.
Can you please suggest a way to handle this?
I tried adding a wait timer and then a check to see if the export has begun. It didn't quite work as expected.
Two things. The first is more about performance: the do/while loop hammers Exchange with unnecessary requests. A Start-Sleep -Seconds 1 (or any other delay that makes sense depending on the mailbox size(s)) inside the loop is a must.
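Applied to the loop from the question, that might look like this (a sketch; tune the delay to your mailbox sizes):
do {
    Start-Sleep -Seconds 1   # throttle the polling so Exchange isn't hammered
    $Percentage = (Get-MailboxExportRequest -Name $ExportName | Get-MailboxExportRequestStatistics).PercentComplete
    Write-Progress "Mailbox export is in progress." -Status "Export $Percentage% complete" -PercentComplete $Percentage
} while ($Percentage -ne 100)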
Second: rather than waiting for the request to start, just resume it yourself:
$request = Get-MailboxExportRequest -Name $ExportName
if ($request.Status -eq 'Queued') {
    $request | Resume-MailboxExportRequest
}