How can I update the current script before running it? - powershell

We have some PowerShell scripts living on network share A, and the latest versions of those scripts living on read-only network share B. The read-only share is the output of our build system, and it has the build number in the path. Partly out of habit, partly because the scripts must create files on disk, but mostly because the paths are predictable, we copy PowerShell scripts to network share A before executing them. The trouble is that we don't always copy the scripts to network share A, so occasionally those scripts are out of date.
I'm trying to create a script that will update the PowerShell scripts on network share A (by copying the latest versions from share B), and then execute those latest scripts. Right now, my idea is to have a tiny script that grabs the latest script from share B, copies it to share A, and then executes that script on share A.
Is it possible to have a script update itself? I.e., instead of having two scripts, can I have one script (that lives on share A) that copies a newer version of itself from share B to share A, then restarts execution of itself? (I would put in some logic about file-creation date so that on first execution it would update itself, and on second execution it would run the actual meat of the script.)

Yes, you can update the script you're running, then execute it again. (Just make sure to exit the first script after updating.) Here's some sample code I created:
Write-Host "Starting script"
if ($(Get-Item G:\selfUpdater2.ps1).CreationTimeUtc -gt $(Get-Item G:\selfUpdater.ps1).CreationTimeUtc) {
    Copy-Item G:\selfUpdater2.ps1 G:\selfUpdater.ps1
    $(Get-Item G:\selfUpdater.ps1).CreationTimeUtc = [DateTime]::UtcNow
    & G:\selfUpdater.ps1
    exit
}
Write-Host "Continuing original script; will not get here if we updated."
Note that, if you have parameters to pass around, you'll have to pass them to the target script. Since your updated script may well have more or fewer parameters than your current script (some bound, some unbound by the current script), you'll need to iterate through both $script:MyInvocation.BoundParameters and $script:MyInvocation.UnboundArguments to pick up all of them and pass them on.
(Personally, I've had more luck with random-parameter-passing using Invoke-Expression ".\scriptName.ps1 $stringOfArguments" than with &.\scriptName.ps1 $arguments, but your mileage may vary - or you may know more PowerShell than I do. If you use Invoke-Expression, then be sure to re-add quotes around any parameters that have spaces in them.)
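For illustration, here's a minimal sketch of forwarding both kinds of arguments with the call operator; the splatting idiom and variable names are my own assumptions, not part of the original code:
# Hypothetical sketch: forward the original arguments to the updated copy.
$bound   = $script:MyInvocation.BoundParameters    # named parameters that were bound
$unbound = $script:MyInvocation.UnboundArguments   # anything that didn't bind
& G:\selfUpdater.ps1 @bound @unbound               # splat both onto the new version
exit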
There's one drawback: if a mandatory script parameter is removed in a future version of the script, you'll need to run the script at least once with the no-longer-mandatory parameter before it updates itself and allows you to drop the parameter.

Here's a function I put together. Pass it the path of the file that might hold a newer release. It will update the running script and then re-run it with any arguments handed to the original script.
function Update-Myself
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true,
                   Position = 0)]
        [string]$SourcePath
    )
    #Check that the source file exists
    if (Test-Path $SourcePath)
    {
        #The path of THIS script
        $CurrentScript = $MyInvocation.ScriptName
        if (!($SourcePath -eq $CurrentScript))
        {
            if ($(Get-Item $SourcePath).LastWriteTimeUtc -gt $(Get-Item $CurrentScript).LastWriteTimeUtc)
            {
                Write-Host "Updating..."
                Copy-Item $SourcePath $CurrentScript
                #If the script was updated, run it with the original parameters
                & $CurrentScript $script:args
                exit
            }
        }
    }
    Write-Host "No update required"
}
Update-Myself \\path\to\newest\release\of\file.ps1

Related

Add environment variables with PowerShell and bat files

As part of a project, I need to run two bat files from a PowerShell script. These bat files perform several operations, including the creation of environment variables.
But there's a problem: the first set of environment variables is created (those created by the first bat file), but not the second set (which should be created by the second bat file). The second file otherwise appears to run fine, because the rest of its operations complete correctly.
I use the same function to execute both .bat files.
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine")
$argList = "/c '"$batFile'""
$process = Start-Process "cmd.exe" -ArgumentList $argList -Wait -PassThru -Verb runAd
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine")
I use the line
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine")
to reload the environment variables, but that didn't solve the problem. Without this line, the environment variables from the second bat file (the one that runs second) are created; with it, I only get those from the first one.
I also noticed that if I re-run my PowerShell program (so re-run the batch files), I had all the environment variables created.
Well, 1st off:
It looks like you have an error in your code for executing your 2nd batch file, so I suspect you rewrote your code to post it here and it's not at all in its original form; as written, you would never get anything further.
You know what, TL;DR: try this.
I've been writing a lot, and it's a rabbit hole, considering the snippet of code isn't enough of the process, the code has obviously been rewritten (it introduces a clearly different bug), and your description leaves something to be desired.
I'll leave some of the other points below, re-ordered; feel free to read or ignore whatever you like.
But here is the long and short of it:
If you need to run these CMD scripts and get some values out of them to add to PATH, have them run normally and echo the PATH they build to stdout, then capture that in a PowerShell variable, dedupe it in PowerShell, and set the path directly for your existing PowerShell environment.
Amend both of your CMD scripts (AKA batch files) to add this at the very top, before any existing lines:
@(SETLOCAL
ECHO OFF )
CALL :ORIGINAL_SCRIPT_STARTS_HERE >NUL 2>NUL
ECHO=%PATH%
( ENDLOCAL
EXIT /B )
:ORIGINAL_SCRIPT_STARTS_HERE
REM All your original script lines should be present below this line!!
The PowerShell code will basically be:
$batfile_1 = "C:\Admin\SE\Batfile_1.cmd"
$batfile_2 = "C:\Admin\SE\Batfile_2.cmd"
$Path_New = $($env:path) -split ";" | select -unique
$Path_New += $(&$batFile_1) -split ";" | ? {$_ -notin $Path_New}
$Path_New += $(&$batFile_2) -split ";" | ? {$_ -notin $Path_New}
$env:path = $($Path_New | select -unique) -join ";"
Although if you don't need the separate testable steps you could make it more concise as:
$batfile_1 = "C:\Admin\SE\Batfile_1.cmd"
$batfile_2 = "C:\Admin\SE\Batfile_2.cmd"
$env:path = $(
    $( $env:path
       & $batFile_1
       & $batFile_2
    ) -split ";" | select -unique
) -join ";"
Okay, I'm leaving the rest of my points below, mostly as drafted when I stopped amending them while chasing this rabbit hole; they should shed some light on a few aspects here.
2nd off: You do not need Start-Process to run a CMD script; you can run a CMD script natively and PowerShell will automatically instantiate a cmd instance for it.
Start-Process will spawn it as a separate process, sure, but you wait for it anyway, and although you use -PassThru and save the result in a variable, you don't do anything with it to check its status or exit code. You may as well just run the CMD script directly and see its stdout in your PowerShell window, or save it in a variable to log it if needed.
3rd off: Why not just set the environment variables directly using Powershell?
I'm guessing these scripts do other stuff, but it may be that you should just have them echo what they want PATH set to back to the PowerShell script, then dedupe it and set the path when they're done.
4th off: $env:Path is your current process environment's path; it includes the path text from both the System AND the current user profile (the HKLM: and HKCU: registry keys for environment). By contrast, [System.Environment]::GetEnvironmentVariable("Path","Machine") is only your System (Local Machine) path as derived from the registry.
5th off: The operational environment is specific to each shell; when you start a new instance with CMD /c, it inherits the environment of the cmd instance that spawned it.
6th off: Changes made to environment variables do not 'persist', i.e. you can't open a new CMD / PowerShell instance and see them, and once you close the original cmd window they're gone, unless you edit the registry values for these items directly or use SETX in a cmd session (which is problematic and should be avoided!), and SETX by default ONLY affects the USER variables, not the system/local machine variables.
Thus, any changes made to the environment in one CMD instance only operate within that existing shell unless they are changes that persist in the registry, in which case they will only affect new shells that are launched.
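To illustrate that distinction (my own example, not from the original answer):
# Persists to the registry (HKCU) and only shows up in shells started afterwards:
[System.Environment]::SetEnvironmentVariable('MY_FLAG', '1', 'User')
# Only affects the current process and vanishes when it exits:
$env:MY_FLAG = '1'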
7th off: When you launch PowerShell, the interpreter inherits whatever the local machine and current user variables are at the moment it starts. This is what ends up in $env:xxx.
8th off: Setting $env:Path = $([System.Environment]::GetEnvironmentVariable("Path","Machine")) will always reset the current PowerShell environment's path to whatever is stored under the HKLM: registry key for environment variables.
Now, bear in mind we don't have all of your code, only a description of what is happening.
It appears you have one script, let's call it batFile_1.cmd, that is setting some variables in its environment.
Each batch file you run, whether spawned implicitly or explicitly, inherits the previous shell's command environment.
However,
each CMD instance you spawn for your batch files is a separate cmd shell instance, distinct from the environment that powershell.exe (and thus your script) is running in.
I'm just supposing what is happening, since you only give a small snippet, which is not enough to actually reproduce your real issue.
It's hard to know exactly what is happening without the full context, although I'll go into one scenario that might be happening below.
A note: each CMD instance only inherits the values of its parent.
(I feel this is much clearer to explain by opening an actual CMD window and testing how the shell works by spawning another instance of CMD.
When the new instance exits, the variables revert to the previous instance's values, because they only move from parent to child.)
eg:
C:\WINDOWS\system32>set "a=hello"
C:\WINDOWS\system32>echo=%a%
hello
C:\WINDOWS\system32>CMD /V
Microsoft Windows [Version 10.0.18362.1082]
(c) 2019 Microsoft Corporation. All rights reserved.
C:\WINDOWS\system32>(SET "a= there!"
More? SET "b=%a%!a!"
More? )
C:\WINDOWS\system32>echo=%b%
hello there!
C:\WINDOWS\system32>echo=%a%
there!
C:\WINDOWS\system32>exit
C:\WINDOWS\system32>echo=%a%
hello
C:\WINDOWS\system32>echo=%b%
%b%
C:\WINDOWS\system32>
But that isn't really what is happening here; you seem to be resetting the command environment back to the Local Machine values.

PowerShell Self-Updating Script

We have a PowerShell script to continually monitor a folder for new JSON files and upload them to Azure. We have this script saved on a shared folder so that multiple people can run this script simultaneously for redundancy. Each person's computer has a scheduled task to run it at login so that the script is always running.
I wanted to update the script, but then I would have had to ask each person to stop their running script and restart it. This is especially troublesome since we eventually want to run this script in "hidden" mode so that no one accidentally closes out the window.
So I wondered if I could create a script that updates itself automatically. I came up with the code below, and when this script is run and a new version of the script is saved, I expected the running PowerShell window to close when it hit the Exit command and then a new window to open running the new version of the script. However, that didn't happen.
It continues along without a blip. It doesn't close the current window and it even keeps the output from old versions of the script on the screen. It's as if PowerShell doesn't really Exit, it just figures out what's happening and keeps going on with the new version of the script. I'm wondering why this is happening? I like it, I just don't understand it.
#Place at top of script
$lastWriteTimeOfThisScriptWhenItFirstStarted = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
#Continuous loop to keep this script running
While($true) {
    Start-Sleep 3 #seconds
    #Run this script, change the text below, and save this script
    #and the PowerShell window stays open and starts running the new version without a hitch
    "Hi"
    $lastWriteTimeOfThisScriptNow = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
    if($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
        . $PSCommandPath
        Exit
    }
}
Interesting Side Note
I decided to see what would happen if my computer lost connection to the shared folder where the script was running from. It continues to run, but presents an error message every 3 seconds as expected. But, it will often revert back to an older version of the script when the network connection is restored.
So if I change "Hi" to "Hello" in the script and save it, "Hello" starts appearing as expected. If I unplug my network cable for a while, I soon get error messages as expected. But then when I plug the cable back in, the script will often start outputting "Hi" again even though the newly saved version has "Hello" in it. I guess this is a negative side-effect of the fact that the script never truly exits when it hits the Exit command.
. $PSCommandPath is a blocking (synchronous) call, which means that the Exit on the next line isn't executed until the dot-sourced script has itself exited.
Given that $PSCommandPath here is your own script, which never exits (even though it seemingly does), the Exit statement is never reached (assuming that the new version of the script keeps the same fundamental while-loop logic).
While this approach works in principle, there are caveats:
You're using ., the "dot-sourcing" operator, which means the script's new content is loaded into the current scope (and generally you always remain in the same process, as you always do when you invoke a *.ps1 file, whether with . or (the implied) regular call operator, &).
While variables / functions / aliases from the new script then replace the old ones in the current scope, old definitions that you've since removed from the new version of the script would linger and potentially cause unwanted side-effects.
As you observe yourself, your self-updating mechanism will break if the new script contains a syntax error that causes it to exit, because the Exit statement then is reached, and nothing is left running.
That said, you could use that as a mechanism to detect failure to invoke the new version:
Use try { . $PSCommandPath } catch { Write-Error $_ } instead of just . $PSCommandPath
and instead of the Exit command, issue a warning (or do whatever is appropriate to alert someone of the failure) and then keep looping (continue), which means the old script stays in effect until a valid new one is found.
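A minimal sketch of that variant, spliced into the if block of the original loop (the warning text is mine):
if ($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
    try {
        . $PSCommandPath   # blocks for as long as the new version keeps running
        Exit               # only reached if the new version returns normally
    }
    catch {
        Write-Warning "Failed to load the updated script: $_"
        continue           # keep the old version looping until a valid new one appears
    }
}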
Even with the above, the fundamental constraint of this approach is that you may exceed the maximum call-recursion depth. The nested . invocations pile up, and when the nesting limit is reached, you won't be able to perform another, and you're stuck in a loop of futile retries.
That said, as of Windows PowerShell v5.1 this limit appears to be around 4900 nested calls, so if you never expect the script to be updated that frequently while a given user session is active (a reboot / logoff would start over), this may not be a concern.
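If you do want to guard against that, one hedged option (the threshold is arbitrary, my own illustration) is to check the call-stack depth before re-sourcing:
if ((Get-PSCallStack).Count -gt 1000) {
    Write-Warning "Too many nested reloads; exiting so an external mechanism can restart the script."
    Exit
}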
Alternative approach:
A more robust approach would be to create a separate watchdog script whose sole purpose is to monitor for new versions, kill the old running script and start the new one, with an alert mechanism for when starting the new script fails.
Another option is to have the main script have "stages", where it runs commands based on the name of the highest-revision script in a folder. I think mklement0's watchdog is a genius idea, though.
What I'm referring to is doing what you already do, but using variables as your command, with those variables updated to the highest-numbered script name. That way you just drop 10.ps1 into the folder and it will ignore 9.ps1, and the function in that script would be named mainfunction10, etc.
Something like
$command = ((Get-ChildItem C:\path\to\scriptfolder\).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"
The files would have to be named alphabetically from oldest to newest. Otherwise you'll have to sort-object by date.
$command = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"
Or dot-source it instead of calling it as a command, and then have later code call a function like function$command, where the function name is based on the script's name.
I still like the watch dog idea more.
The watchdog would look sort of like
While ($true) {
    $new = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).FullName)[-1]
    If ($old -ne $new) {
        Kill $old
        Sleep 10
        & $new
    }
    $old = $new
    Sleep 600
}
Mind you, I'm not certain how the scripts are run, and you may need to find instances of PowerShell based on the command line used to start them.
$kill = ((WMIC path win32_process get Caption,Processid,Commandline).where({$_.commandline -contains $command})).processid
Kill $kill
Would replace kill $old
This command is an educated guess and untested.
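As a hedged alternative that avoids parsing WMIC's text output, something along these lines should work with CIM objects (the -like match on the command line is my substitution for -contains):
$target = Get-CimInstance Win32_Process |
    Where-Object { $_.CommandLine -like "*$command*" }
if ($target) { Stop-Process -Id $target.ProcessId -Force }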
Other tricks would be running the main script from the watchdog as a job, getting the job ID, and then checking for file changes. If a new file comes in, the watchdog could kill the job by ID and repeat the whole process.
You could also just have the script end, and have a Windows scheduled task rerun it every 10 minutes. That way the script simply runs every ten minutes, although this is more expensive per startup.
Instead of Exit you could use break to kill the loop, and the script will exit naturally.
You can use Test-Connection to check for the server, but if it's every 3 seconds, that's a lot of pings from a lot of computers.

Powershell - Check for updates to script prior to running

I was wondering if anyone knew of a way to have a powershell script check for updates to itself prior to running.
I have a script that I am going to be dispatching out to multiple computers, and don't want to have to redeploy it to each computer each time a change in the script is made. I would like to have it check a certain location to see if there is a newer version of itself (and update itself if needed).
I can't seem to think of a way to do it. Please let me know if anyone can help out. Thanks.
Well, one way may be to create a simple batch file that runs your actual script; the first lines of that batch file can check for the existence of a .ps1 in your update folder. If there is one, it can copy it down first and then start your PowerShell script.
E.g. whenever there is an update, you put your 'MyPowerShellScript.ps1' script in the c:\temp\update\ folder,
and let's assume your script will be running from
c:\temp\myscriptfolder\
then you can create a batch file like this:
if NOT exist C:\temp\update\MyPowerShellScript.ps1 goto :END
copy /Y C:\temp\update\MyPowerShellScript.ps1 C:\temp\MyScriptFolder\
:END
%systemroot%\System32\WindowsPowerShell\v1.0\powershell.exe -nologo -noprofile -file "C:\temp\MyScriptFolder\MyPowerShellScript.ps1"
Here's a function I put together. Pass it the path of the file that might hold a newer release; it will update the running script and then re-run it with any arguments handed to the original script. Do this early in the script, since any results produced before the restart will be lost. I typically check that the network is up and that I can see the share holding the newer file, then run this:
function Update-Myself
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true,
                   Position = 0)]
        [string]$SourcePath
    )
    #Check that the file we're comparing against exists
    if (Test-Path $SourcePath)
    {
        #The path of THIS script
        $CurrentScript = $MyInvocation.ScriptName
        if (!($SourcePath -eq $CurrentScript))
        {
            if ($(Get-Item $SourcePath).LastWriteTimeUtc -gt $(Get-Item $CurrentScript).LastWriteTimeUtc)
            {
                Write-Host "Updating..."
                Copy-Item $SourcePath $CurrentScript
                #If the script was updated, run it with the original parameters
                & $CurrentScript $script:args
                exit
            }
        }
    }
    Write-Host "No update required"
}
Update-Myself "\\path\to\newest\release\of\file.ps1"

Call a PowerShell script in a new, clean PowerShell instance (from within another script)

I have many scripts. After making changes, I like to run them all to see if I broke anything. I wrote a script to loop through each, running it on fresh data.
Inside my loop I'm currently running powershell.exe -command <path to script>. I don't know if that's the best way to do this, or if the two instances are totally separate from each other.
What's the preferred way to run a script in a clean instance of PowerShell? Or should I be saying "session"?
Using powershell.exe seems to be a good approach but with its pros and cons, of course.
Pros:
Each script is invoked in a separate clean session.
Even crashes do not stop the whole testing process.
Cons:
Invoking powershell.exe is somewhat slow.
Testing depends on exit codes but 0 does not always mean success.
Neither of the cons is mentioned in the question as a potential problem.
The demo script is below. It has been tested with PS v2 and v3. Script names may include special characters like spaces, apostrophes, brackets, backticks, and dollar signs. One requirement mentioned in the comments is the ability of scripts to get their own paths in their code. With the proposed approach, scripts can get their own path as
$MyInvocation.MyCommand.Path
# make a script list, use the full paths or explicit relative paths
$scripts = @(
'.\test1.ps1' # good name
'.\test 2.ps1' # with a space
".\test '3'.ps1" # with apostrophes
".\test [4].ps1" # with brackets
'.\test `5`.ps1' # with backticks
'.\test $6.ps1' # with a dollar
'.\test ''3'' [4] `5` $6.ps1' # all specials
)
# process each script in the list
foreach($script in $scripts) {
    # make a command; mind &, ' around the path, and escaping '
    $command = "& '" + $script.Replace("'", "''") + "'"

    # invoke the command, i.e. the script in a separate process
    powershell.exe -command $command

    # check for the exit code (assuming 0 is for success)
    if ($LastExitCode) {
        # in this demo just write a warning
        Write-Warning "Script $script failed."
    }
    else {
        Write-Host "Script $script succeeded."
    }
}
If you're on PowerShell 2.0 or higher, you can use jobs to do this. Each job runs in a separate PowerShell process e.g.:
$scripts = ".\script1.ps1", ".\script2.ps1"
$jobs = @()
foreach ($script in $scripts)
{
    $jobs += Start-Job -FilePath $script
}
Wait-Job $jobs
foreach ($job in $jobs)
{
    "*" * 60
    "Status of '$($job.Command)' is $($job.State)"
    "Script output:"
    Receive-Job $job
}
Also, check out the PowerShell Community Extensions. It has a Test-Script command that can detect syntax errors in a script file. Of course, it won't catch runtime errors.
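If you don't have PSCX, the built-in language parser (PowerShell v3+) can perform a similar syntax check; this is a substitution of mine, not what the answer recommends:
$tokens = $null; $errors = $null
$path = (Resolve-Path '.\script1.ps1').Path
[System.Management.Automation.Language.Parser]::ParseFile($path, [ref]$tokens, [ref]$errors) | Out-Null
if ($errors.Count) { Write-Warning "Syntax errors in $($path): $($errors.Count)" }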
One tip for PowerShell V3 users: we (the PowerShell team) added a new API on the Runspace class called ResetRunspace(). This API resets the global variable table back to the initial state for that runspace (as well as cleaning up a few other things). What it doesn't do is clean out function definitions, types and format files or unload modules. This allows the API to be much faster. Also note that the Runspace has to have been created using an InitialSessionState object, not a RunspaceConfiguration instance. ResetRunspace() was added as part of the Workflow feature in V3 to support parallel execution efficiently in a script.
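For reference, a minimal sketch of how that API might be driven from script (my own illustration; $scripts is assumed to hold full script paths):
$iss = [System.Management.Automation.Runspaces.InitialSessionState]::CreateDefault()
$rs  = [System.Management.Automation.Runspaces.RunspaceFactory]::CreateRunspace($iss)
$rs.Open()
foreach ($script in $scripts) {
    $ps = [System.Management.Automation.PowerShell]::Create()
    $ps.Runspace = $rs
    $null = $ps.AddCommand($script).Invoke()
    $ps.Dispose()
    $rs.ResetRunspace()   # global variables go back to the initial state (v3+ only)
}
$rs.Close()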
The two instances are totally separate, because they are two different processes. Generally, starting a PowerShell process for every script run is not the most efficient approach. Depending on the number of scripts and how often you re-run them, it may be affecting your overall performance. If it's not, I would leave everything AS IS.
Another option would be to run in the same runspace (this is a correct word for it), but clean everything up every time. See this answer for a way to do it. Or use below extract:
$sysvars = get-variable | select -Expand name
function remove-uservars {
    get-variable |
        where {$sysvars -notcontains $_.name} |
        remove-variable
}

Parallelizing powershell script execution

I have 8 PowerShell scripts. A few of them have dependencies, which means they can't be executed in parallel; they have to be executed one after another.
Some of the PowerShell scripts have no dependencies and can be executed in parallel.
Following is the dependency explained in detail
Powershell scripts 1, 2, and 3 depend on nothing else
Powershell script 4 depends on Powershell script 1
Powershell script 5 depends on Powershell scripts 1, 2, and 3
Powershell script 6 depends on Powershell scripts 3 and 4
Powershell script 7 depends on Powershell scripts 5 and 6
Powershell script 8 depends on Powershell script 5
I know that manually hard-coding the dependencies is possible, but 10 more PowerShell scripts may be added, and dependencies among them may be added as well.
Has anyone achieved this kind of parallelism by resolving dependencies? If so, please share how to proceed.
You need to look at PowerShell 3.0 Workflows. They offer the features you need for this requirement. Something like this:
workflow Install-myApp {
    param ([string[]]$computername)
    foreach -parallel($computer in $computername) {
        "Installing MyApp on $computer"
        #Code for invoking installer here
        #This can take as long as 30mins and may reboot a couple of times
    }
}
workflow Install-MyApp2 {
    param ([string[]]$computername)
    foreach -parallel($computer in $computername) {
        "Installing MyApp2 on $computer"
        #Code for invoking installer here
        #This can take as long as 30mins!
    }
}
workflow New-SPFarm {
    param ([string[]]$computername)
    Sequence {
        Parallel {
            Install-MyApp2 -computername "Server2","Server3"
            Install-MyApp -computername "Server1","Server4","Server5"
        }
        Sequence {
            #This activity can happen only after the set of activities in the above parallel block are complete
            "Configuring First Server in the Farm [Server1]"
            #The following foreach should take place only after the above activity is complete and that is why we have it in a sequence
            foreach -parallel($computer in $computername) {
                "Configuring SharePoint on $computer"
            }
        }
    }
}
How familiar with parallel programming in general are you? Have you heard of and used the concept of mutual exclusion? The concept in general is to use some kind of messaging/locking mechanism to protect a shared resource among different parallel threads.
In your case, you're making the dividing lines be the scripts themselves - which I think may make this much simpler than most of the techniques outlined in that wikipedia article. Would this simple template work for what you're looking for?
Define a folder in the local file system. This location will be known to all scripts (default parameter).
Before running any of the scripts, make sure any files in that directory are deleted.
For each script, as the very last step of their execution, they should write a file in the shared directory with their script name as the name of the file. So script1.ps1 would create script1 file, for example.
Any script that has a dependency on another script will define these dependencies in terms of the file names of the scripts. If script3 is dependent on script1 and script2, this will be defined as a dependency parameter in script3.
All scripts with dependencies will run a function that checks whether the files exist for the scripts they depend on. If they do, it proceeds with the execution of the script; otherwise it pauses until they are complete.
All scripts get kicked off simultaneously by a master script / batch file. All of the scripts are run as PowerShell jobs so that the OS will run them in parallel. Most of the scripts will start up, see they have dependencies, and then wait patiently for these to get resolved before continuing with the actual execution of the script body.
The good news is that this would allow for flexible changing of dependencies. Every script writes a file, making no assumption about whether someone else is waiting for them or not. Changing the dependency of a particular script would be a simple one-line change or change of input parameter.
This is definitely not a perfect solution, though. For instance, what happens if a script fails (or your script can exit through multiple different code paths but you forget to write the file in one of them)? This could cause a deadlock situation where no dependent scripts get kicked off. The other bad thing is the busy wait of sleeping or spinning while waiting for the right files to get created; this could be corrected by implementing an event-based approach where you have the OS watch the directory for changes.
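A rough sketch of steps 4 through 6 above; the flag directory, function name, and parameters are all made up for illustration:
$flagDir = 'C:\temp\script-flags'   # shared location known to every script

function Wait-ForDependencies {
    param([string[]]$DependsOn)
    # Busy-wait until a flag file exists for every dependency
    while ($DependsOn | Where-Object { -not (Test-Path (Join-Path $flagDir $_)) }) {
        Start-Sleep -Seconds 5
    }
}

# At the top of e.g. script5.ps1:
#   Wait-ForDependencies -DependsOn 'script1', 'script2', 'script3'
# ...script body...
# As the very last step of script5.ps1:
#   New-Item -ItemType File -Path (Join-Path $flagDir 'script5') -Force | Out-Null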
Hope this helps and isn't all garbage.
You'll just have to order your calls appropriately. There's nothing built-in that will handle the dependencies for you.
Run 1, 2, 3 at the same time with Start-Job.
Wait for them to finish: Get-Job -State Running | Wait-Job
Run 4, 5 at the same time with Start-Job.
Wait for them to finish: Get-Job -State Running | Wait-Job
Run 6 and wait for it.
Run 7, 8 at the same time with Start-Job.
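A hedged sketch of that ordering (the script paths are placeholders):
Start-Job -FilePath .\script1.ps1
Start-Job -FilePath .\script2.ps1
Start-Job -FilePath .\script3.ps1
Get-Job -State Running | Wait-Job

Start-Job -FilePath .\script4.ps1
Start-Job -FilePath .\script5.ps1
Get-Job -State Running | Wait-Job

Start-Job -FilePath .\script6.ps1 | Wait-Job

Start-Job -FilePath .\script7.ps1
Start-Job -FilePath .\script8.ps1
Get-Job -State Running | Wait-Job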