Suppose I have a simple Job that starts off like this:
$job = Start-Job -ArgumentList 1, 2 -ScriptBlock {
    $var1 = $args[0]
    $var2 = $args[1]
    $var3 = $var1 + $var2
    Write-Host $var3
}
Now suppose that I want to keep the session behind $job alive, introduce new arguments, and simply continue executing inside $job.
From what I understand, that's not how jobs work in PowerShell: once the commands inside the job have executed, the job is considered completed and can no longer be reused. If my understanding is correct, is there any other way to achieve the effect of having a background task inside your PowerShell session that you can keep injecting with new commands/variables, without having to create a new job/session/process? For clarity, this is local (on the same machine).
Dennis' helpful answer provides the crucial pointer: use the PowerShell SDK to create an in-process PowerShell instance that you can use for repeated invocation of commands.
The following sample code demonstrates this: It keeps prompting you for a command to execute and uses a single, reusable PowerShell instance to execute it (press Ctrl-C to exit):
$ps = [powershell]::Create()
while ($true) {
    $cmd = Read-Host 'Type a command to execute'
    # Execute and output the results.
    $ps.AddScript($cmd).Invoke()
    # Relay errors, if any.
    $ps.Streams.Error | Write-Error
    # Reset in preparation for the next command.
    $ps.Commands.Clear(); $ps.Streams.ClearStreams()
}
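Note that state persists across invocations, because the instance's runspace is reused. A minimal standalone illustration (separate from the loop above; the variable names are made up):
$ps2 = [powershell]::Create()
$ps2.AddScript('$n = 1 + 2').Invoke()   # first invocation sets $n in the runspace
$ps2.Commands.Clear()
$ps2.AddScript('$n * 10').Invoke()      # outputs 30 - $n survived the first call
$ps2.Dispose()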
I think you would be better off looking into PowerShell runspaces instead; they can communicate back and forth with each other, since they are threads within the same process.
Start-Job actually starts a new PowerShell session in a separate, isolated process.
See MS Docs - Start-Job -RunAs32 and MS Scripting - Beginning Use of PowerShell Runspaces.
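For illustration, a minimal sketch of the runspace approach (the variable names are made up, not taken from either linked article): a runspace lives in the caller's process, so you can inject variables into it and read results back directly.
$rs = [runspacefactory]::CreateRunspace()
$rs.Open()
$rs.SessionStateProxy.SetVariable('var1', 1)   # inject a variable into the runspace
$ps = [powershell]::Create()
$ps.Runspace = $rs
$ps.AddScript('$var1 + 41').Invoke()           # outputs 42, using the injected value
$rs.Close()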
We have received 20 JMeter test plans, each testing one endpoint, which we need to run. On some tests we need to pass parameters and on others we don't.
My idea was to create a powershell script that loops through the directories and runs a test, waits until finished and then runs the next test. When we develop a new endpoint we just create a new test plan and save it in the appropriate folder and the powershell script will include it next time we loop through tests.
I need the tests to finish before starting the next plan, so I'm looking at something like:
Write-Output "Running Test 1"
$proc = Start-Process -FilePath "C:\JmeterLoadTests\apache-jmeter-5.2.1\bin\jmeter" -ArgumentList "-n -t C:\JmeterLoadTests\test\enpointsType1\test-1-1.jmx -Jduration=10"
$proc.WaitForExit()
Write-Output "Proc 1 Done"
Write-Output "Running Proc 2"
$proc2 = Start-Process -FilePath "C:\JmeterLoadTests\apache-jmeter-5.2.1\bin\jmeter" -ArgumentList "-n -t C:\JmeterLoadTests\test\enpointsType1\test-1-1.jmx -Jduration=10"
$proc2.WaitForExit()
This just launches both tests simultaneously.
My question is then how to make PowerShell wait for the previous test to finish.
Your immediate problem is that your Start-Process call is missing the -PassThru switch, which is required for the call to return a System.Diagnostics.Process instance representing the newly launched process.
# ...
# Note the use of -PassThru
$proc = Start-Process -PassThru -FilePath "C:\JmeterLoadTests\apache-jmeter-5.2.1\bin\jmeter" -ArgumentList "-n -t C:\JmeterLoadTests\test\enpointsType1\test-1-1.jmx -Jduration=10"
$proc.WaitForExit()
# ...
Alternatively, if you don't need to examine the process exit code (which $proc.ExitCode in the above command would give you), you can simply use the -Wait switch, which makes Start-Process itself wait for the process to terminate:
# ...
# Note the use of -Wait
Start-Process -Wait -FilePath "C:\JmeterLoadTests\apache-jmeter-5.2.1\bin\jmeter" -ArgumentList "-n -t C:\JmeterLoadTests\test\enpointsType1\test-1-1.jmx -Jduration=10"
# ...
Taking a step back:
To synchronously execute console applications or batch files in the current console window, call them directly; do not use Start-Process (or the System.Diagnostics.Process API it is based on).
Aside from being syntactically easier and less verbose, this has two key advantages:
You can directly capture their output.
It allows you to examine the process exit code via the automatic $LASTEXITCODE variable afterwards.
Assuming that jmeter is a console application (the docs suggest it runs as one when invoked with arguments):
# ...
# Direct invocation in the current window.
# Stdout and stderr output will print to the console by default,
# but can be captured or redirected.
# Note: &, the call operator, isn't strictly needed here,
# but would be if your executable path were quoted
# or contained variable references.
& C:\JmeterLoadTests\apache-jmeter-5.2.1\bin\jmeter -n -t C:\JmeterLoadTests\test\enpointsType1\test-1-1.jmx -Jduration=10
# Use $LASTEXITCODE to examine the process exit code.
# ...
See this answer for more information.
It might be the case that you're suffering from Out-Default cmdlet execution; the easiest fix is separating the commands with semicolons, like:
cmd1;cmd2;cmd3;etc;
This way PowerShell will wait for the previous command to complete before starting the next one.
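For example (an illustrative sketch with shortened paths; direct invocation of a console application is synchronous):
& .\jmeter -n -t test-1.jmx; & .\jmeter -n -t test-2.jmx   # test-2 starts only after test-1 exits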
It might be a better idea to consider switching to the Maven JMeter Plugin, which by default executes all tests it finds under the src/test/jmeter folder relative to the pom.xml file.
I am running multiple PowerShell scripts at once. I would like to be able to wait on certain ones to finish before opening new scripts. Basically, I was thinking that if I could find the command line that ran it, something like powershell.exe -Path "<script dir>", that would do it.
I tried doing a Get-Process | gm to find any parameters that I could call to get that information, and I didn't see any (which doesn't mean they aren't there). I tried looking through Task Manager to see if I could view something through the GUI that I could link to, but that didn't help either.
I hope I can get something like
Start-Process -FilePath ".\<script>.ps1" -ArgumentList "<args>"
do
{
    sleep 10
}
until (-not (Get-Process -ProcessName "PowerShell" | where "<parameter>" -EQ ".\<script>"))
I need to wait until that process is done, but I don't want to put a -Wait at the end of the Start-Process, because after that Start-Process kicks off I need some other items to run while my .\<script> is running. I just need it to wait before another section of the script kicks off.
Have a look at the "Job" cmdlets: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_jobs?view=powershell-6
And at the $PID automatic variable, which gives the process ID of the current PowerShell session.
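For example, a hedged sketch of how jobs could replace the polling loop (the script path and arguments are the question's placeholders):
$job = Start-Job -FilePath ".\<script>.ps1" -ArgumentList "<args>"
# ... other work can run here while the job executes ...
Wait-Job $job       # blocks until that specific job is done
Receive-Job $job    # collect its output
Remove-Job $job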
In our company we use TFS 2017 (update 1) for building and releasing our products. The release part is made up of several steps which include the execution of some Powershell scripts.
This is how I configure the PS step.
What I noticed is that the output of the PowerShell scripts is not written in real time while they are executing, but all together at the end of the PS task. This is very annoying in the case of long-running scripts, as we are not able to see the live progress of the task; we have to wait for the task to finish to see the results.
I wrote some simple PS scripts to debug this problem, but neither using Write-Host (which does not write anything at all, not even at the end of the task) nor using Write-Output nor Write-Verbose -Verbose allows me to write real-time output.
This is one example script I tried, without success.
Write-Output "Begin a lengthy process..."
$i = 0
while ($i -le 100)
{
    Start-Sleep 1
    Write-Output "Inner code executed"
    $i += 10
}
Write-Output "Completed."
Did you ever find yourself in this situation?
Regards
I can reproduce this issue; based on my test, real-time output is not supported for the PowerShell on Target Machines task.
Write-Output or Write-Verbose -Verbose can output to the console, but not in real time; the output displays only once the PowerShell script has completely executed.
To display real-time output, you can use the Utility:PowerShell task instead of the Deploy:PowerShell on Target Machines task.
So, as a workaround, you can deploy an agent on the target machine on which you want to run the PowerShell script, then trigger the release using that agent, running the PowerShell script with the Utility:PowerShell task.
UPDATE:
Well, I found another workaround with the Utility:PowerShell task:
1. Set up WinRM for the target computers; refer to WinRM configuration.
2. Copy the target PS script to the target machine (D:\TestShare\PStest.ps1 in the sample below).
3. Create a PowerShell script that calls powershell.exe to run the target PowerShell script on the target machine; see the sample below:
Param(
    [string]$computerName = "ICTFS2015.test.com"
)
$Username = "domain\username"
$Password = ConvertTo-SecureString "Password" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential($Username, $Password)
Invoke-Command -ComputerName $computerName -Credential $cred -ScriptBlock {Invoke-Expression -Command:"powershell.exe /c 'D:\TestShare\PStest.ps1'"}
4. Add a Utility:PowerShell task to run the above PowerShell script. (You can check it in, or run it as an Inline Script.)
I have a PowerShell script that spawns additional PowerShell instances to run cmdlets.
I need to be able to Start-Process a PowerShell instance from a nested foreach loop, allow the process to run through, and then stop that process when it's complete.
foreach ($thing in $things) {
    foreach ($stuff in $stuffs) {
        Start-Process powershell.exe -NoNewWindow | Get-CmdIChooseAsANewCmd | Export-Csv -Path stuff
        # Stop-Process <the same instantiated process>
    }
}
How do I stop the process that is specifically tied to my foreach loop when it is complete with its task?
If you're not doing anything requiring credentials you can use jobs.
$job = Start-Job -ScriptBlock {
    # Do stuff
}
$job.StopJob()
Anything involving credentials invokes the wrath of double hopping, but I don't think that works the way you're running it in your question either, so it shouldn't be a problem.
Alternatively you can use runspaces but that's a big can of worms. See this link if you want to go there.
I have many scripts. After making changes, I like to run them all to see if I broke anything. I wrote a script to loop through each, running it on fresh data.
Inside my loop I'm currently running powershell.exe -command <path to script>. I don't know if that's the best way to do this, or if the two instances are totally separate from each other.
What's the preferred way to run a script in a clean instance of PowerShell? Or should I be saying "session"?
Using powershell.exe seems to be a good approach but with its pros and cons, of course.
Pros:
Each script is invoked in a separate clean session.
Even crashes do not stop the whole testing process.
Cons:
Invoking powershell.exe is somewhat slow.
Testing depends on exit codes but 0 does not always mean success.
Neither of these cons is mentioned in the question as a potential problem.
The demo script is below. It has been tested with PS v2 and v3. Script names may include special characters like spaces, apostrophes, brackets, backticks, and dollar signs. One requirement mentioned in the comments is the ability for scripts to get their own paths in their code. With the proposed approach, a script can get its own path as $MyInvocation.MyCommand.Path.
# make a script list, use the full paths or explicit relative paths
$scripts = @(
    '.\test1.ps1'                 # good name
    '.\test 2.ps1'                # with a space
    ".\test '3'.ps1"              # with apostrophes
    ".\test [4].ps1"              # with brackets
    '.\test `5`.ps1'              # with backticks
    '.\test $6.ps1'               # with a dollar
    '.\test ''3'' [4] `5` $6.ps1' # all specials
)
# process each script in the list
foreach ($script in $scripts) {
    # make a command; mind &, ' around the path, and escaping '
    $command = "& '" + $script.Replace("'", "''") + "'"

    # invoke the command, i.e. the script, in a separate process
    powershell.exe -command $command

    # check the exit code (assuming 0 is for success)
    if ($LastExitCode) {
        # in this demo just write a warning
        Write-Warning "Script $script failed."
    }
    else {
        Write-Host "Script $script succeeded."
    }
}
If you're on PowerShell 2.0 or higher, you can use jobs to do this. Each job runs in a separate PowerShell process e.g.:
$scripts = ".\script1.ps1", ".\script2.ps1"
$jobs = @()
foreach ($script in $scripts)
{
    $jobs += Start-Job -FilePath $script
}
Wait-Job $jobs
foreach ($job in $jobs)
{
    "*" * 60
    "Status of '$($job.Command)' is $($job.State)"
    "Script output:"
    Receive-Job $job
}
Also, check out the PowerShell Community Extensions. It has a Test-Script command that can detect syntax errors in a script file. Of course, it won't catch runtime errors.
One tip for PowerShell V3 users: we (the PowerShell team) added a new API on the Runspace class called ResetRunspace(). This API resets the global variable table back to the initial state for that runspace (as well as cleaning up a few other things). What it doesn't do is clean out function definitions, types and format files or unload modules. This allows the API to be much faster. Also note that the Runspace has to have been created using an InitialSessionState object, not a RunspaceConfiguration instance. ResetRunspace() was added as part of the Workflow feature in V3 to support parallel execution efficiently in a script.
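A minimal sketch of what that could look like (assumptions: PowerShell v3+, using only the SDK types named above):
$iss = [System.Management.Automation.Runspaces.InitialSessionState]::CreateDefault()
$rs = [System.Management.Automation.Runspaces.RunspaceFactory]::CreateRunspace($iss)
$rs.Open()
$ps = [powershell]::Create()
$ps.Runspace = $rs
$ps.AddScript('$x = 42').Invoke()   # defines a global variable in the runspace
$ps.Commands.Clear()
$rs.ResetRunspace()                 # resets the global variable table; $x is gone
$rs.Close()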
The two instances are totally separate, because they are two different processes. Generally, starting a new PowerShell process for every script run is not the most efficient approach. Depending on the number of scripts and how often you re-run them, it may affect your overall performance. If it doesn't, I would leave everything as is.
Another option would be to run in the same runspace (this is the correct word for it) but clean everything up every time. See this answer for a way to do it, or use the extract below:
# snapshot of the variables that exist before any script runs
$sysvars = get-variable | select -Expand name
function remove-uservars {
    # remove every variable created since the snapshot
    get-variable |
        where {$sysvars -notcontains $_.name} |
        remove-variable
}
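For example, a possible way to use this in a test loop (illustrative; $scripts is assumed to hold your script paths):
foreach ($script in $scripts) {
    & $script         # run the script in the current runspace
    remove-uservars   # then discard any variables it created
}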