I want to do some housekeeping before executing any external console applications (setting some environment vars).
In my web research, it looks like overriding NotifyBeginApplication() in $host might do the trick. Unfortunately, I can't figure out how to do that.
Here's essentially what I want to do...
$host = $host | `
Add-Member -force -pass -mem scriptmethod NotifyBeginApplication `
{ $env:_startTime = [datetime]::now; $env:_line = $myInvocation.Line }
This doesn't work as $host is constant and it may be the wrong approach anyway.
The documentation that I've been able to find states that this function is called before any "legacy" console application is executed, but another blog entry says that it's only called for console applications that have no I/O redirection.
So, is this the right way to do this? If so, how would I override the function?
If not, how could this be done?
The only alternative I've seen that might work is to fully implement a custom PSHost. That seems possible with existing available source code, but beyond what I want to attempt.
If this is code that you can modify, then create this function:
Function call {
    ## Set environment here
    $env:FOO = "BAR"
    Invoke-Expression "$args"
}
Now pass your native commands to the call function. Examples:
call cmd /c set
call cmd /c dir
call somefunkyexe /blah -ooo aaah -some:`"Funky Argument`"
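Tying this back to the housekeeping in the original question, the wrapper can set the desired variables before delegating. A minimal sketch (the `_startTime` and `_line` names come from the question's own code):

```powershell
# Sketch: record housekeeping data before running a native command.
Function call {
    $env:_startTime = [datetime]::Now
    $env:_line      = $MyInvocation.Line
    Invoke-Expression "$args"
}

call cmd /c set   # the child process inherits _startTime and _line
```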
If this is code that you can't modify, then things will be complicated.
I also agree with your (unfortunate) conclusion that you will need to create your own custom host to handle this.
You can create additional runspaces easily enough via scripting, but this method isn't accessible in your currently running host (the default console).
Related
I'm running a script inside Invoke-Command. I have a global variable in my local session that holds many configuration settings. I'm passing this object to Invoke-Command. How do I set a property on that object in the remote session?
What you're trying to do fundamentally cannot work:
A script block that runs remotely of necessity runs in a different process (on a different machine).[1]
You can only pass a copy of a local object to a remotely executing script block - either via Invoke-Command's -ArgumentList parameter or via a $using: reference inside the script block, but whatever modifications you make to that copy are not seen by the caller.
The solution is to output (return) the data of interest from the remotely executed script block, which you can then (re)assign to a local variable on the caller's side.
Note: Due to the limitations of the XML-based serialization that PowerShell employs across process / computer boundaries, what is being passed and/or returned is situationally only a (method-less) emulation of the original object; that is, type fidelity cannot be guaranteed - see this answer for background information.
For example:
# Define a local object.
$localObject = [pscustomobject] @{ Prop = 42 }
# Pass a *copy* of that object to a remotely executing script block,
# modify it there, then *output* the *modified copy* and
# assign it back to the local variable.
$localObject =
  Invoke-Command -ComputerName someServer {
      $objectCopy = $using:localObject
      # Modify the copy.
      $objectCopy.Prop++
      # Now *output* the modified copy.
      $objectCopy
  }
Note that while type fidelity is maintained in this simple example, the object returned is decorated with remoting-related properties, such as .PSComputerName - see this answer.
[1] Passing objects as-is to a different runspace, so that updates to it are also seen by the caller, works only if the other runspace is a thread in the same process, such as when you use Start-ThreadJob and ForEach-Object -Parallel, and even then only if the object is an instance of a .NET reference type. Also, if there's a chance that multiple runspaces try to update the given object in parallel, you need to manage concurrency explicitly in order to avoid thread-safety issues.
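A minimal illustration of the in-process case described in [1], assuming the ThreadJob module is available (it ships with PowerShell 7+): a hashtable is a .NET reference type, so the thread job's update is visible to the caller.

```powershell
# In-process runspaces can share reference-type objects directly.
# Synchronized() guards against concurrent updates from multiple threads.
$shared = [hashtable]::Synchronized(@{ Count = 0 })

# Start-ThreadJob runs in another thread of the *same* process.
Start-ThreadJob { $s = $using:shared; $s.Count++ } |
    Receive-Job -Wait -AutoRemoveJob

$shared.Count   # the caller sees the modification made by the thread job
```

This only works in-process; with Invoke-Command against a remote machine, the copy semantics described above always apply.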
I work on a PowerShell debugger implemented as a script, Add-Debugger.ps1.
It looks peculiar perhaps but there are use cases for it.
All works well except when the debugger stops at a breakpoint in a script module.
One of the debugger functions is to execute interactively typed user commands and show results.
The problem is that these commands are not invoked in the current script module scope but "somewhere else".
The problem may be worked around if the current module is known, say $module.
Then commands invoked as $module.Invoke() would work in the module scope.
But how to find/get this $module? That is the question.
NB $ExecutionContext.SessionState.Module does not seem to help, even if I get it using Get-Variable -Scope 1.
Because $ExecutionContext is for the DebuggerStop handler, not its parent in the module.
And Get-Variable is invoked "somewhere else" and does not get variables from the module.
I have found a workaround using this dynamically compiled piece of C#:
Add-Type @'
using System;
using System.Management.Automation;

public class AddDebuggerHelpers
{
    public ScriptBlock DebuggerStopProxy;

    public EventHandler<DebuggerStopEventArgs> DebuggerStopHandler { get { return OnDebuggerStop; } }

    void OnDebuggerStop(object sender, DebuggerStopEventArgs e)
    {
        SessionState state = ((EngineIntrinsics)ScriptBlock.Create("$ExecutionContext").Invoke()[0].BaseObject).SessionState;
        state.InvokeCommand.InvokeScript(false, DebuggerStopProxy, null, state.Module, e);
    }
}
'@
It works and gives me the $module that I needed. By the way, for my particular task I use $module.NewBoundScriptBlock(...) instead of the mentioned $module.Invoke(...), to preserve the current scope.

If another native PowerShell way without C# is found and posted, I'll accept it as an answer even after accepting my own. Otherwise this workaround is the only known way so far.
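For reference, using the obtained module to run user input in module scope might look like the following sketch, where $module is the module state obtained via the helper above and $userInput is a hypothetical string typed at the debugger prompt:

```powershell
# Turn the user's typed command into a script block.
$block = [scriptblock]::Create($userInput)

# Bind it to the module so it executes in the module's scope.
$bound = $module.NewBoundScriptBlock($block)

# Dot-source to preserve the current scope, as described in the answer.
. $bound
```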
I would like to come up with a mechanism by which I can share 'data' between different Powershell processes. This would be in order to implement a kind of job system, whereby a function can be run in one Powershell process, complete and then someone communicate its status to a function run from another (distinct) Powershell process...
I guess what I'd ideally like is for PSJob results to be shareable between sessions, but this does not seem to be possible.
I can think of a few dirty ways of achieving this (like O/S environment variables), but am I missing a semi-elegant way?
For example:
Function giveMeNumber
{
    $return_value = Get-Random -Minimum -100 -Maximum 100
    Return $return_value
}
What are some ways I could get this function to store its return value somewhere and then retrieve it from another PowerShell session (without using a database)?
Cheers.
The QA mentioned by Keith refers to using MSMQ, a message-queueing feature optionally available on desktop, mobile, and server OSes from Microsoft.
It doesn't run by default on desktop OSes, so you would have to ensure that the appropriate service is started. It seems like serious overkill to me unless you want something pretty beefy.
Of course, the most common choice for this type of task would be a simple shared file.
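For the giveMeNumber example, the shared-file approach can be as simple as serializing the result with Export-Clixml in one session and reading it back in another. A sketch (the file path is an arbitrary choice):

```powershell
# Session A: run the function and persist its result to a file.
Function giveMeNumber {
    Get-Random -Minimum -100 -Maximum 100
}
giveMeNumber | Export-Clixml -Path "$env:TEMP\result.clixml"

# Session B (a separate PowerShell process): read the result back.
$value = Import-Clixml -Path "$env:TEMP\result.clixml"
```

Export-Clixml/Import-Clixml preserve the value's type for simple cases, which a plain text file would not.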
Alternatively, you could create a TCP listener in each of the jobs that you want to have accept external info. I haven't done this myself in PowerShell, though I know it is possible. Node.js would be a more familiar environment, or Python. Seems like overkill if a shared file would do the job!
Another way would be to use the registry. Though you might consider that cheating since it is actually a database (of a very broken and simplistic sort).
I'm actually not sure that environment variables would work, since they can be picky about the parent environment scope (for example, setting an env variable in a cmd doesn't make it available outside of the cmd scope by default).
UPDATE: Doh, missed a few! Some of them very obvious. Microsoft has a list:
Clipboard
COM
Data Copy
DDE
File Mapping
Mailslots
Pipes
RPC
Windows Sockets
Pipes was the one I was trying to remember. Windows sockets would be similar to a TCP listener.
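Named pipes in particular are directly usable from PowerShell via .NET. A bare-bones sketch, with an arbitrary pipe name, where the two snippets run in separate PowerShell processes:

```powershell
# Process 1: create a named pipe server and wait for a status message.
$server = [System.IO.Pipes.NamedPipeServerStream]::new('MyJobPipe')
$server.WaitForConnection()          # blocks until a client connects
$reader = [System.IO.StreamReader]::new($server)
$status = $reader.ReadLine()         # e.g. "done"
$reader.Dispose(); $server.Dispose()

# Process 2: connect to the pipe and send the job status.
$client = [System.IO.Pipes.NamedPipeClientStream]::new('.', 'MyJobPipe', 'Out')
$client.Connect()
$writer = [System.IO.StreamWriter]::new($client)
$writer.AutoFlush = $true
$writer.WriteLine('done')
$writer.Dispose(); $client.Dispose()
```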
I am calling a series of PowerShell functions from a master script (each function is a test).
I specify the tests in an XML file and I want them to run in order.
The functions to call are organized in PowerShell module files (.psm1). The master script calls Import-Module as needed and then calls the function via something like this...
$newResults = & "$runFunction" @ARGS
or this...
$newResults = Invoke-Expression $runFunctionWithArgs
I have gotten both to work just fine and the XML file parsing invokes these commands in the correct order.
Problem: The tests are apparently launched asynchronously so that the first test I launch does not necessarily get invoked and complete before the second test is invoked.
Note, the tests are functions in a PowerShell module and not commands so I do not think that Start-Process will work (but please tell me if you know how to make that work).
More Details:
It would take too much to add all the code, but essentially what each function call does is create a hashtable with one or more "TestResult" objects. "TestResult" has things like Success codes and a TimeStamp. Each test does things that take different amounts of time, but all synchronous. I would expect the timestamps to be the same order that I called each test, especially since the first thing each test does is get the timestamp so it should not depend on what the test does. When I run in the ISE, everything goes in order. When I run in the command window, the timestamps do not match my expected order.
Workaround:
My working theory is still that PowerShell is somehow parallelizing the calls. I can get consistent results by making the invocation of each call dependent on the results of the previous call. It is a dummy check, because I know that what I test will always be true, but PowerShell doesn't know that:
if ($newResults.Count -ne [Long]::MaxValue) { $newResults = & "$runFunction" @ARGS }
PowerShell thinks that it needs to know if the previous call count is not MaxValue.
I'm attempting to add some fairly limited PowerShell support to my application: I want the ability to periodically run a user-defined PowerShell script, show any output, and (eventually) be able to handle progress notifications and user-prompt requests. I don't need command-line-style interactive support, or (I think) remote access or the ability to run multiple simultaneous scripts, unless the user script does that itself from within the shell I host. I'll eventually want to run the script asynchronously or on a background thread, and probably seed the shell with some initial variables and maybe a cmdlet, but that's as "fancy" as this feature is likely to get.
I've been reading the MSDN documentation about writing host application code, but while it happily explains how to create a PowerShell object, or Runspace, or RunspacePool, or Pipeline, there's no indication about why one would choose any of these approaches over another.
I think I'm down to one of these two, but I'd like some feedback about which approach is the better one to take:
PowerShell shell = PowerShell.Create();
shell.AddCommand(/* set initial state here? */);
shell.AddStatement();
shell.AddScript(myScript);
shell.Invoke(/* can set host! */);
or:
Runspace runspace = RunspaceFactory.CreateRunspace(/* can set host and initial state! */);
PowerShell shell = PowerShell.Create();
shell.Runspace = runspace;
shell.AddScript(myScript);
shell.Invoke(/* can set host here, too! */);
(One of the required PSHost-derived class methods is EnterNestedPrompt(), and I don't know whether the user-defined script I run could cause that to get called or not. If it can, then I'll be responsible for "starting a new nested input loop" (as per here)... if that impacts which path to take above, that would also be good to know.)
Thanks!
What are they?
Pipeline
A pipeline is a way to chain commands inside a PowerShell script. Example: you "pipe" the output of Get-ChildItem to Where-Object with | to filter it:
Get-ChildItem | Where-Object { $_.Length -gt 1kb }
PowerShell Object
The PowerShell object refers to a PowerShell session, like the one you get when you start powershell.exe.
Runspace
Every PowerShell session has its own runspace (you'll always get output from Get-Runspace). It defines the state of the PowerShell session; hence the InitialSessionState object/property of a runspace. You may decide to create a new PowerShell session, with its own runspace, from within PowerShell to enable a kind of multithreading.
RunspacePool
Last but not least, the RunspacePool. As the name says, it's a pool of runspaces (or PowerShell sessions) that can be used to process a lot of complicated tasks. As soon as one of the runspaces in the pool has finished its task, it may take the next task until everything is done. (100 things to do with 10 runspaces: on average they process 10 each, but one may process 8 while two others process 11...)
When to use what?
Pipeline
The pipeline is used inside of scripts. It makes it easier to build complex scripts and should be used as often as possible.
PowerShell Object
The PowerShell object is used whenever you need a new PowerShell session. You can create one inside of an existing script, be it C# or PowerShell. It's useful for easy multithreading. On its own it will create a default session.
Runspace
If you want to create a non-standard PowerShell session, you can manipulate the runspace object before you create a PowerShell session with it. It's useful when you want to share synchronized variables, functions, or classes with the extra runspaces. Slightly more complex multithreading.
RunspacePool
As mentioned before, it's a heavy tool for heavy work: when one execution of a script takes hours and you need to run it very often. E.g., in combination with remoting you could simultaneously install something on every node of a big cluster, and the like.
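To make the RunspacePool case concrete, here is a minimal sketch; the pool size, task count, and script block are illustrative:

```powershell
# Create a pool of up to 10 runspaces and queue 100 tasks onto it.
$pool = [runspacefactory]::CreateRunspacePool(1, 10)
$pool.Open()

$jobs = foreach ($i in 1..100) {
    $ps = [powershell]::Create()
    $ps.RunspacePool = $pool
    [void]$ps.AddScript({ param($n) Start-Sleep -Milliseconds 100; $n * 2 }).AddArgument($i)
    [pscustomobject]@{ PS = $ps; Handle = $ps.BeginInvoke() }
}

# Collect results as the runspaces finish, then clean up.
$results = foreach ($j in $jobs) { $j.PS.EndInvoke($j.Handle); $j.PS.Dispose() }
$pool.Close()
```

The 100 tasks are distributed across the 10 runspaces, exactly as described above.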
You are overthinking it. The code you show in the samples is a good start. Now you just need to read the result of Invoke() and check the error and warning streams.
A PowerShell host provides some hooks that a runspace can use to communicate with the user: stream and format output, show progress, report errors, etc. For what you want to do, you do not need a PowerShell host. You can read the results of script execution back via the PowerShell class, check for errors and warnings, read the output streams, and show notifications to the user using the facilities of your application. This will be much more straightforward and effective than writing an entire PowerShell host just to show a message box when errors are detected.
Also, a PowerShell object HAS a runspace when it is created; you do not need to give it one. If you need to retain the runspace to preserve the environment, just keep the entire PowerShell object and clear Commands and all Streams each time after you call Invoke().
The next question you should ask is how to process the result of PowerShell.Invoke() and read PowerShell.Streams.