PowerShell Delete Locked File But Keep In Memory

Until recently, we've been deploying .exe applications by simply copying them manually to the destination folder on the server. Often, though, the file is already running at the time of deployment (it is invoked from a SQL Server job), sometimes even as multiple concurrent instances. We don't want to kill the process while it's running. We also can't wait for it to finish, because it keeps being invoked, sometimes multiple times concurrently.
As a workaround, what we've done is a "cut and paste" via Windows Explorer on the .exe file into another folder. Apparently this moves the file (effectively removing it from its original location) while keeping its data available to the processes that already have it loaded, so they continue without issues. Then we'd put the new files in the original folder, and any later invocations would pick those up.
We've now moved to an automated deploy tool and we need an automated way of doing this.
Stop-Process -name SomeProcess
in PowerShell would kill the process, which I don't want to do.
Is there a way to do this?
(C# would also be OK.)

function moverunningprocess($process, $path)
{
    # Strip a trailing backslash from the path, if present
    if ($path.Substring($path.Length - 1, 1) -eq "\") { $path = $path.Substring(0, $path.Length - 1) }
    $fullpath = $path + "\" + $process
    $movetopath = $path + "--Backups\$(Get-Date -f MM-dd-yyyy_HH_mm_ss)"
    $moveprocess = $false

    # Check whether any running process was started from the exe we are about to replace
    $runningprocess = Get-WmiObject Win32_Process -Filter "name = '$process'" | Select-Object CommandLine
    foreach ($tp in $runningprocess)
    {
        if ($tp.CommandLine -ne $null)
        {
            $p = $tp.CommandLine.Replace('"', '').Trim()
            if ($p -eq $fullpath) { $moveprocess = $true }
        }
    }

    # If the exe is in use, move the folder contents to a timestamped backup folder;
    # the running processes keep working from the moved copy.
    if ($moveprocess -eq $true)
    {
        New-Item -ItemType Directory -Force -Path $movetopath
        Move-Item -Path "$path\*.*" -Destination "$movetopath\"
    }
}
moverunningprocess "processname.exe" "D:\Programs\ServiceFolder"

Since you're using SQL Server to call the EXE, why not add a table that contains the path to the latest version of the file and modify the code that fires the EXE? That way, when a new version is rolled out, you can create a new folder, place the file in it, and update the table to point to it. Any still-active threads keep access to the old version, and any new threads pick up the new executable. You can then delete the old file once it's no longer needed.
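A minimal sketch of that approach, assuming a lookup table such as dbo.ExeVersions and the SqlServer module's Invoke-Sqlcmd cmdlet (all names and paths here are illustrative, not part of the original setup):

# Deploy the new build side by side so running instances keep using the old copy
$version   = Get-Date -Format 'yyyyMMdd_HHmmss'
$targetDir = "D:\Programs\ServiceFolder\$version"
New-Item -ItemType Directory -Force -Path $targetDir | Out-Null
Copy-Item -Path '.\processname.exe' -Destination $targetDir

# Point the SQL job at the new copy; the job would read this path before launching the EXE
Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDb' -Query "UPDATE dbo.ExeVersions SET Path = N'$targetDir\processname.exe' WHERE Name = N'processname.exe';"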

Related

How to prevent multiple instances of the same PowerShell 7 script?

Context
On a build server, a PowerShell 7 script script.ps1 will be started and will be running in the background on the remote computer.
What I want
A safety net to ensure that at most one instance of the script.ps1 script is running at any time on the build server or remote computer.
What I tried:
I tried meddling with PowerShell 7 background jobs (by executing script.ps1 as a job inside a wrapper script wrapper.ps1); however, that didn't solve the problem, as jobs do not carry over to (and can't be accessed from) other PowerShell sessions.
What I tried looks like this:
# inside wrapper.ps1
$running_jobs = $(Get-Job -State Running) | Where-Object { $_.Name -eq "ImportantJob" }
if ($running_jobs.Count -eq 0) {
    Start-Job -FilePath .\script.ps1 -Name "ImportantJob" -ArgumentList @($some_variables)
} else {
    Write-Warning "Could not start new job; the existing job must be terminated first."
}
To reiterate, the problem with that is that $running_jobs only returns the jobs running in the current session, so this code only limits jobs to one per session, allowing multiple instances to be run if multiple sessions were mistakenly opened.
What I also tried:
I tried to look into Get-CimInstance:
$processes = Get-CimInstance -ClassName Win32_Process | Where-Object {$_.Name -eq "pwsh.exe"}
While this does return the current running PowerShell instances, these elements carry no information on the script that is being executed, as shown after I run:
foreach ($p in $processes) {
    $p | Format-List *
}
I'm therefore lost and I feel like I'm missing something.
I appreciate any help or suggestions.
I like to define a config path under the $env:ProgramData location using a CompanyName\ProjectName scheme so I can keep "per system" configuration there.
You could use a similar scheme with a defined location to store a lock file that is created when the script runs and deleted at the end of it (as already suggested in the comments).
Then it is up to you to add additional checks if needed (what happens if the script exits prematurely while the lock is still present? one option is shown after the example below).
Example
# Define default path (Not user specific)
$ConfigLocation = "$Env:ProgramData\CompanyName\ProjectName"
# Create path if it does not exist
New-Item -ItemType Directory -Path $ConfigLocation -EA 0 | Out-Null
$LockFilePath = "$ConfigLocation\Instance.Lock"
$Locked = $null -ne (Get-Item -Path $LockFilePath -EA 0)
if ($Locked) {Exit}
# Lock
New-Item -Path $LockFilePath
# Do stuff
# Remove lock
Remove-Item -Path $LockFilePath
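To reduce the risk of a stale lock when the script errors out partway through (this won't help if the process is killed outright), the work can be wrapped in try/finally. A small sketch building on the variables above:

# Lock, then guarantee cleanup even if the work throws
New-Item -Path $LockFilePath -ItemType File | Out-Null
try {
    # Do stuff
}
finally {
    # Remove lock
    Remove-Item -Path $LockFilePath -ErrorAction SilentlyContinue
}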
Alternatively, on Windows, you could also use a scheduled task without a schedule and with the setting "If the task is already running, then the following rule applies: Do not start a new instance". From there, instead of calling the original script, you call a proxy script that just launches the scheduled task.
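Such a proxy script could be as small as this, assuming a task named "ImportantJob" has already been registered with the "Do not start a new instance" rule (the task name is an assumption):

# proxy.ps1 - ask Task Scheduler to start the job; it enforces the single-instance rule
Start-ScheduledTask -TaskName 'ImportantJob'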

if then else not seeing else argument

I'm trying to teach myself some PowerShell scripting to automate some tasks at work.
The latest task I tried to automate was creating a copy of user files to a network folder, so that users can easily relocate their files when swapping computers.
The problem is that my script always takes the first option in the whole shebang; it never picks the "else" option.
I'll walk you through part of the script. (I translated some words to make it easier to read)
#the script asks whether you want to create a copy, or put a copy back
$question1 = Read-Host "What would you like to do with your backup? make/put back"
if ($question1 -match 'put back')
{
    Write-Host ''
    Write-Host 'Checking for backup'
    Write-Host ''
    #check for existing backup
    if (-Not (Test-Path -LiteralPath "G:\backupfolder"))
    { Write-Host "no backup has been found" }
    Elseif (Test-Path -LiteralPath "G:\backupfolder")
    {
        Write-Host "a backup has been found."
        Copy-Item -Path "G:\backupfolder\pictures\" -Destination "C:\Users\$env:USERNAME\ ....}}
Above you see the part where a user would want to put a "backup" back.
It checks if a "backup" exists on the G-drive. If the script doesn't see a backup folder, it says so. If the script DOES see the backup, it should copy the content of the folders on the G-drive to the similarly named folders in the user profile. The problem is: so far it only acts as if there is never a G:\backupfolder to be found. It seems that I'm doing something wrong with if/then/else.
I tried with if-->Else, and with if-->Elseif, but neither works.
I also thought that it could be the Test-Path, so I tried adding -LiteralPath, but to no avail.
There is more to the script but it's just more if/then/else. If I can get it to work on this part I should be able to get the rest working. What am I not seeing/doing wrong?
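A quick way to narrow down whether Test-Path is the culprit is to echo what it actually returns before branching on it (a small diagnostic sketch using the same path as the script):

# Check what Test-Path reports for the backup folder before the if statement runs
$backupExists = Test-Path -LiteralPath "G:\backupfolder"
Write-Host "Test-Path result: $backupExists"   # $true if the folder is visible to this session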

powercli remove empty folders from vcenter

I have been searching for quite some time and I can't seem to find anything close.
I am working on automating our VMs for our DEV & QA departments using vCAC.
I have reached the point where, during VM creation, a folder with the project's name is created under the department (for example DEV\Upgrade1).
The problem starts when the DEV guys decide to delete the whole project and start over.
I am left with a lot of empty folders throughout the vCenter server, and I was wondering if there is a PowerCLI script I can run daily to check whether there are any empty folders (with no VMs inside) and delete them if they exist.
It's a tricky issue because I found I can use Remove-Folder, but only if I give its name, which I don't know.
And I don't want to delete folders with VMs inside.
Can anyone help me?
If you're already connected to your server in PowerCLI, run this:
$folders = Get-Folder
foreach ($folder in $folders)
{
    # Only remove folders that contain no VMs
    if ((Get-Folder $folder | Get-VM).Count -eq 0)
    {
        Remove-Folder -Folder $folder -Confirm:$false
    }
}
Add -Location $datacenter to the first Get-Folder if you want to limit the scope to one datacenter.

Starting an exe file with parameters on a remote PC

We have a program running on about 400 PCs (All W7). This program is called Wisa.
We receive regular updates for this program, named something like wisa_update1.0.exe, wisa_update1.1.exe, wisa_update2.0.exe, etc. The users cannot do the update themselves due to account restrictions.
We manage to do the update once and distribute it with a copy-item to all PCs. Then with Enter-PSSession I can go to each PC and update the program with the following command:
wisa_update3.0 /verysilent
(with the argument /verysilent no questions are asked)
This is already a major gain in time, but I want to do the update more automatically.
I have a file "pc.txt" with all 400 PCs in it. I already use this file for the Copy-Item via Get-Content. Now I want to use this file to do the updates with the above command, but I can't find a good way to run a remote executable with a parameter in PowerShell.
What you want to do is load the list with Get-Content -Path $PClist and then run your script actions in a foreach loop. You'll want to adapt this example to your own script; a concrete example of the loop body follows after it:
$PClist = 'c:\pc.txt'
$aComputers = Get-Content -Path $PClist
foreach ($Computer in $aComputers)
{
    # code actions to perform against each $Computer
}
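For example, the loop body could start the silent update remotely with Invoke-Command (a sketch; the installer path on each PC is an assumption based on the earlier Copy-Item step):

foreach ($Computer in $aComputers)
{
    # Run the update silently on the remote machine and wait for it to finish
    Invoke-Command -ComputerName $Computer -ScriptBlock {
        Start-Process -FilePath 'C:\Temp\wisa_update3.0.exe' -ArgumentList '/verysilent' -Wait
    }
}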
You can also use multithreading and finish in a fraction of the time (provided you have a capable machine). The link below explains how to do it well.
http://www.get-blog.com/?p=22

Powershell running under a service hangs on *.zip CopyHere

I'm running a Windows Service (Hudson) which in turn spawns a PowerShell process to run my custom PowerShell commands. Part of my script is to unzip a file using CopyHere. When I run this script locally, I see a progress dialog pop up as the files are extracted and copied. However, when this runs under the service, it hangs at the point where a dialog would otherwise appear.
Here's the unzip portion of my script.
# Extract the contents of a zip file to a folder
function Extract-Zip {
    param([string]$zipFilePath, [string]$destination)

    if (Test-Path($zipFilePath)) {
        $shellApplication = New-Object -com shell.application
        $zipFile = Get-Item $zipFilePath
        $zipFolder = $shellApplication.NameSpace($zipFile.FullName)
        $destinationFile = Get-Item $destination
        $destinationFolder = $shellApplication.NameSpace($destinationFile.FullName)
        $destinationFolder.CopyHere($zipFolder.Items())
    }
}
I suspect that because it's running under a service process, which is headless (no interaction with the desktop), it's somehow stuck trying to display a dialog.
Is there a way around this?
If this is still relevant, I managed to fix it by passing 1564 as the CopyHere flags parameter.
So in my case the extract-zip function looks like this:
function Expand-ZIPFile {
    param(
        $file, $destination
    )
    $shell = New-Object -com shell.application
    $zip = $shell.NameSpace($file)
    foreach ($item in $zip.Items())
    {
        # 1564 is a combination of flags that suppresses all UI (see the breakdown below)
        $shell.NameSpace($destination).CopyHere($item, 1564)
        "$($item.Path) extracted"
    }
}
The 1564 value is the sum of the following flags (4 + 8 + 16 + 512 + 1024 = 1564), described here: http://msdn.microsoft.com/en-us/library/windows/desktop/bb787866(v=vs.85).aspx
(4) Do not display a progress dialog box.
(8) Give the file being operated on a new name in a move, copy, or rename operation if a file with the target name already exists.
(16) Respond with "Yes to All" for any dialog box that is displayed.
(512) Do not confirm the creation of a new directory if the operation requires one to be created.
(1024) Do not display a user interface if an error occurs.
If this is running on Vista or Windows 7, popping up UI from a service isn't going to be seen by the end user as you suspected. See this paper on Session 0 Isolation. However, does the progress dialog require user input? If not, I wouldn't think that would cause the service to hang. I would look for an option to disable the progress display. If you can't find that, then try switching to another ZIP extractor. PSCX 1.2 comes with an Expand-Archive cmdlet. I'm sure there are also others available.
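If a newer PowerShell is available (5.0 or later), the built-in Expand-Archive cmdlet avoids the Shell COM object and its dialogs entirely; a minimal sketch with placeholder paths:

# No UI is involved, so this works under a non-interactive service account
Expand-Archive -Path 'C:\drop\build.zip' -DestinationPath 'C:\deploy\build' -Force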
Looking at the PowerShell documentation, it looks like the -NonInteractive option may help here.