I work on a PowerShell debugger implemented as a script, Add-Debugger.ps1.
It may look peculiar, but there are use cases for it.
All works well except when the debugger stops at a breakpoint in a script module.
One of the debugger functions is to execute interactively typed user commands and show results.
The problem is that these commands are not invoked in the current script module scope but "somewhere else".
The problem may be worked around if the current module is known, say $module.
Then commands invoked as $module.Invoke() would work in the module scope.
But how to find/get this $module? That is the question.
NB $ExecutionContext.SessionState.Module does not seem to help, even if I get it using Get-Variable -Scope 1.
Because $ExecutionContext is for the DebuggerStop handler, not its parent in the module.
And Get-Variable is invoked "somewhere else" and does not get variables from the module.
I have found a workaround using this dynamically compiled piece of C#:
Add-Type @'
using System;
using System.Management.Automation;

public class AddDebuggerHelpers
{
    public ScriptBlock DebuggerStopProxy;

    public EventHandler<DebuggerStopEventArgs> DebuggerStopHandler
    {
        get { return OnDebuggerStop; }
    }

    void OnDebuggerStop(object sender, DebuggerStopEventArgs e)
    {
        SessionState state = ((EngineIntrinsics)ScriptBlock.Create("$ExecutionContext").Invoke()[0].BaseObject).SessionState;
        state.InvokeCommand.InvokeScript(false, DebuggerStopProxy, null, state.Module, e);
    }
}
'@
It works and gives me the $module that I needed. By the way, for my
particular task I use $module.NewBoundScriptBlock(...) instead of the
mentioned $module.Invoke(...), in order to preserve the current scope.
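To illustrate how the helper might be wired up, here is a minimal sketch; the registration via [System.Management.Automation.Runspaces.Runspace]::DefaultRunspace.Debugger and the proxy's parameter names are my assumptions, not part of the original code:

```powershell
# Assumption: AddDebuggerHelpers has already been compiled by the Add-Type call above.
$helper = New-Object AddDebuggerHelpers
$helper.DebuggerStopProxy = {
    param($Module, $e)
    # $Module is the module whose scope the debugger stopped in ($null for plain scripts);
    # $e is the DebuggerStopEventArgs forwarded by the C# handler.
    if ($Module) {
        # Invoke user commands in the module scope, preserving the current scope, e.g.:
        # & $Module.NewBoundScriptBlock($userCommand)
    }
}

# Subscribe the C# handler to the current runspace's debugger.
$runspace = [System.Management.Automation.Runspaces.Runspace]::DefaultRunspace
$runspace.Debugger.add_DebuggerStop($helper.DebuggerStopHandler)
```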
If another native PowerShell way without C# is found and posted, I'll accept it
as an answer even after accepting my own. Otherwise this workaround is the only known way so far.
This is an extremely strange issue: I have some PowerShell code and a XAML GUI I made with Visual Studio for a fairly basic app for users. I'm fairly experienced with PowerShell, but completely new to XAML.
Here's the code snippet:
$var_RKH.Add_Click({
    $targetSite = $RKH
    $var_TargetSite.Content = "Rock Hill"
})
Somehow, $targetSite is still null after this click event. I've confirmed that the variable $RKH contains the needed data. In another click event on another button, I copy-pasted that same line and it works. What's absolutely twisted my brain is that the third line works: the $var_TargetSite.Content property is correctly updated.
So the click event is definitely firing; the second line just gets completely skipped somehow, while the third line works fine. In other click events, the exact same code works fine. I must be missing something very simple, because this is absolutely twisting my brain.
Script blocks aren't really scopes. They're script blocks and do not inherit from the enclosing scope. Instead, you pass values into them, just as you would for a thread in C#. This can get confusing because both scopes and script blocks use the {} delimiter -- but they're not the same thing, and they're not interchangeable either:
[scriptblock]$sb = { 'This is a test' }
if ($true) $sb

will throw a ParserError rather than print 'This is a test'.
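For completeness, the fix is to give `if` its mandatory brace-delimited body and invoke the script block explicitly with the call operator:

```powershell
[scriptblock]$sb = { 'This is a test' }
if ($true) { & $sb }   # prints: This is a test
```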
Please be very careful about "sideloading" variables, especially with WPF; there is a high degree of multithreading in any WPF application, and you literally don't know who is going to access what, and when.
If you can, try to resolve the value of $RKH inside the script block rather than try to force it in.
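One hedged sketch of that suggestion: an assignment inside a script block creates a handler-local variable, so if the goal is to update $targetSite for later use, an explicit scope modifier makes the write land in the script scope instead (assuming the handler runs in the same runspace):

```powershell
$script:targetSite = $null

$var_RKH.Add_Click({
    # $script: makes the assignment visible outside the handler's own scope.
    $script:targetSite = $RKH
    $var_TargetSite.Content = "Rock Hill"
})
```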
Or you could implement code-behind for the XAML and put both .xaml and .xaml.cs in an assembly, then use that in PS.
For reasons I'll not get into here, I'm being forced to use PowerShell to call and then populate a separate application, and to integrate it into a batch file, so powershell -command "& { }", which is already painful. I've got a while loop set up to call, check for the process ID to come up, then wait and call again if it hasn't come up yet.
The problem here is that afterwards I utilize a static member out of visualbasic to switch the focus to that application.
Namely, [microsoft.visualbasic.interaction]::AppActivate($hwnd) -- where $hwnd is the process ID of the application in question.
I hate to put anything like an artificial timer on there to wait for the application to finish loading; I'd much rather use a polling while loop. But static member calls don't appear to support -ErrorAction or -ErrorVariable, and the try {} catch {} appeared to just ignore the error. I was hoping to use it to set a flag that makes the while loop cycle again after a one-second sleep.
What other ways are there to catch errors from a static member call (::)?
Disregard. The Try {} catch {} works great if I don't replace the final bracket with a close parenthesis.
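For reference, here is a sketch of the pattern described above; the one-second interval and loop shape are illustrative assumptions:

```powershell
Add-Type -AssemblyName Microsoft.VisualBasic

$activated = $false
while (-not $activated) {
    try {
        # Throws if the target process has no active window yet.
        [Microsoft.VisualBasic.Interaction]::AppActivate($hwnd)
        $activated = $true
    } catch {
        Start-Sleep -Seconds 1   # application still loading; retry
    }
}
```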
Instead of calling a function at the end of all scripts to perform cleanup tasks, I'm looking to register for an 'on return' event for when the script (not the PowerShell session) is finished.
A script can return at various points though (eg, no records to process), so the current situation is problematic.
Register-EngineEvent applies to the PowerShell session, and operators run scripts manually, thus it's problematic.
I can't find a list of built-in powershell events or an alternative solution.
@Vesper wrote it as a comment, but a try/finally block is definitely what I would suggest for this:
try {
    # some code
} finally {
    # this gets executed even if the code in the try block throws an exception
}
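In the scenario described (a script that can return at various points), the finally block still runs on every exit path, which is what makes it suitable as an "on return" hook. A hedged sketch, with Get-Records and Remove-TempFiles as hypothetical helpers:

```powershell
try {
    $records = Get-Records              # hypothetical data-fetching step
    if (-not $records) {
        return                          # early exit: finally still executes
    }
    $records | ForEach-Object { <# process each record #> }
} finally {
    Remove-TempFiles                    # hypothetical cleanup; runs on every exit path
}
```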
I am calling a series of PowerShell functions from a master script (each function is a test).
I specify the tests in an XML file and I want them to run in order.
The functions to call are organized in PowerShell module files (.psm1). The master script calls Import-Module as needed and then calls the function via something like this...
$newResults = & "$runFunction" @ARGS
or this...
$newResults = Invoke-Expression $runFunctionWithArgs
I have gotten both to work just fine and the XML file parsing invokes these commands in the correct order.
Problem: The tests are apparently launched asynchronously so that the first test I launch does not necessarily get invoked and complete before the second test is invoked.
Note, the tests are functions in a PowerShell module and not commands so I do not think that Start-Process will work (but please tell me if you know how to make that work).
More Details:
It would take too much to add all the code, but essentially what each function call does is create a hashtable with one or more "TestResult" objects. "TestResult" has things like Success codes and a TimeStamp. Each test does things that take different amounts of time, but all synchronous. I would expect the timestamps to be the same order that I called each test, especially since the first thing each test does is get the timestamp so it should not depend on what the test does. When I run in the ISE, everything goes in order. When I run in the command window, the timestamps do not match my expected order.
Workaround:
My working theory is still that PowerShell is somehow parallelizing the calls. I can get consistent results by making the invocation of each call dependent on the results of the previous call. It is a dummy check, because I know that what I test will always be true, but PowerShell doesn't know that:
if ($newResults.Count -ne [Long]::MaxValue) { $newResults = & "$runFunction" @ARGS }
PowerShell thinks that it needs to know if the previous call count is not MaxValue.
I want to do some housekeeping before executing any external console applications (setting some environment vars).
In my web research, it looks like overriding NotifyBeginApplication() in $host might do the trick. Unfortunately, I can't figure out how to do that.
Here's essentially what I want to do...
$host = $host | `
    Add-Member -Force -PassThru -MemberType ScriptMethod NotifyBeginApplication `
    { $env:_startTime = [datetime]::Now; $env:_line = $MyInvocation.Line }
This doesn't work as $host is constant and it may be the wrong approach anyway.
The documentation that I've been able to find states that this function is called before any "legacy" console application is executed, but another blog entry says that it's only called for console applications that have no I/O redirection.
So, is this the right way to do this? If so, how would I override the function?
If not, how could this be done?
The only alternative I've seen that might work is to fully implement a custom PSHost. That seems possible with existing available source code, but beyond what I want to attempt.
If this is code that you can modify, then create this function:
Function call {
    ## Set environment here
    $env:FOO = "BAR"
    Invoke-Expression "$args"
}
Now pass your native commands to the call function. Examples:
call cmd /c set
call cmd /c dir
call somefunkyexe /blah -ooo aaah -some:`"Funky Argument`"
If this is code that you can't modify, then things will be complicated.
I also agree with your (unfortunate) conclusion that you will need to create your own custom host to handle this.
You can create additional runspaces easily enough via scripting, but this method isn't accessible in your currently running host (the default console).