I'm attempting to add some fairly limited PowerShell support to my application: I want the ability to periodically run a user-defined PowerShell script and show any output, and (eventually) be able to handle progress notifications and user-prompt requests. I don't need command-line-style interactive support, or (I think) remote access or the ability to run multiple simultaneous scripts, unless the user script does that itself from within the shell I host. I'll eventually want to run the script asynchronously or on a background thread, and probably seed the shell with some initial variables and maybe a cmdlet, but that's as "fancy" as this feature is likely to get.
I've been reading the MSDN documentation about writing host application code, but while it happily explains how to create a PowerShell object, or Runspace, or RunspacePool, or Pipeline, there's no indication of why one would choose any of these approaches over another.
I think I'm down to one of these two, but I'd like some feedback about which approach is the better one to take:
PowerShell shell = PowerShell.Create();
shell.AddCommand(/* set initial state here? */);
shell.AddStatement();
shell.AddScript(myScript);
shell.Invoke(/* can set host! */);
or:
Runspace runspace = RunspaceFactory.CreateRunspace(/* can set host and initial state! */);
PowerShell shell = PowerShell.Create();
shell.Runspace = runspace;
shell.AddScript(myScript);
shell.Invoke(/* can set host here, too! */);
(One of the required PSHost-derived class methods is EnterNestedPrompt(), and I don't know whether the user-defined script I run could cause that to get called or not. If it can, then I'll be responsible for "starting a new nested input loop" (as per here)... if that impacts which path to take above, that would also be good to know.)
Thanks!
What are they?
Pipeline
A Pipeline is a way to chain commands inside a PowerShell script. Example: you "pipe" the output from Get-ChildItem to Where-Object with | to filter it:
Get-ChildItem | Where-Object { $_.Length -gt 1kb }
PowerShell Object
The PowerShell object refers to a PowerShell session, like the one you get when you start powershell.exe.
Runspace
Every PowerShell session has its own runspace (you'll always get output from Get-Runspace). It defines the state of the PowerShell session, hence the InitialSessionState object/property of a runspace. You may decide to create a new PowerShell session, with its own runspace, from within PowerShell to enable a kind of multithreading.
RunspacePool
Last but not least, the RunspacePool. As the name says, it's a pool of runspaces (or PowerShell sessions) that can be used to process a lot of complicated tasks. As soon as one of the runspaces in the pool has finished its task, it takes the next one until everything is done. (100 things to do with 10 runspaces: on average they process 10 each, but one may process 8 while two others process 11...)
When to use what?
Pipeline
The pipeline is used inside scripts. It makes it easier to build complex scripts and should be used as often as possible.
PowerShell Object
The PowerShell object is used whenever you need a new PowerShell session. You can create one inside an existing script, be it C# or PowerShell. It's useful for easy multithreading. On its own it will create a default session.
Runspace
If you want to create a non-standard PowerShell session, you can manipulate the Runspace object before you create a PowerShell session with it. It's useful when you want to share synchronized variables, functions or classes with the extra runspaces. Slightly more complex multithreading.
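For illustration, here is a minimal sketch in PowerShell itself (the variable names and workload are made up): a second runspace is given a synchronized hashtable via its session state, so both sides see the same object while the script runs.

$shared = [hashtable]::Synchronized(@{ Progress = 0 })

$runspace = [runspacefactory]::CreateRunspace()
$runspace.Open()
$runspace.SessionStateProxy.SetVariable('shared', $shared)

$ps = [powershell]::Create()
$ps.Runspace = $runspace
[void]$ps.AddScript({ 1..5 | ForEach-Object { $shared.Progress = $_; Start-Sleep -Milliseconds 200 } })
$handle = $ps.BeginInvoke()

# The caller can watch $shared.Progress while the script runs in the other runspace.
while (-not $handle.IsCompleted) { Start-Sleep -Milliseconds 100 }
$ps.EndInvoke($handle)
$ps.Dispose(); $runspace.Dispose()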
RunspacePool
As mentioned before, it's a heavy tool for heavy work: when one execution of a script takes hours and you need to do it very often. E.g., in combination with remoting you could simultaneously install something on every node of a big cluster, and the like.
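A rough sketch of that pattern (pool size and workload are placeholders): queue many PowerShell instances against one pool and let at most five run at a time.

$pool = [runspacefactory]::CreateRunspacePool(1, 5)
$pool.Open()

$jobs = foreach ($i in 1..100) {
    $ps = [powershell]::Create()
    $ps.RunspacePool = $pool
    [void]$ps.AddScript({ param($n) Start-Sleep -Seconds 1; $n * $n }).AddArgument($i)
    [pscustomobject]@{ Shell = $ps; Handle = $ps.BeginInvoke() }
}

# Collect results as each runspace in the pool finishes its work.
$results = foreach ($job in $jobs) {
    $job.Shell.EndInvoke($job.Handle)
    $job.Shell.Dispose()
}
$pool.Close()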
You are overthinking it. The code you show in samples is a good start. Now you just need to read the result of Invoke() and check the error and warning streams.
A PowerShell host provides some hooks that the runspace can use to communicate with the user, like streaming and formatting output, showing progress, reporting errors, etc. For what you want to do, you do not need a PowerShell host. You can read results back from script execution using the PowerShell class, check for errors and warnings, read the output streams, and show notifications to the user using the facilities of your application. This will be much more straightforward and effective than writing an entire PowerShell host just to show a message box when errors are detected.
Also, the PowerShell object already HAS a Runspace when it is created; you do not need to give it one. If you need to retain the runspace to preserve the environment, just keep the entire PowerShell object and clear Commands and all Streams each time after you call Invoke.
The next question you should ask is how to process the result of PowerShell::Invoke() and read PowerShell::Streams.
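Something along these lines, shown in PowerShell syntax for brevity (the same PowerShell-class members apply one-to-one from C#; $myScript stands in for the user-defined script text):

$ps = [powershell]::Create()
[void]$ps.AddScript($myScript)
$results = $ps.Invoke()                    # the script's output objects

if ($ps.HadErrors) {
    foreach ($err in $ps.Streams.Error)   { <# surface $err in your application's UI #> }
}
foreach ($warning in $ps.Streams.Warning) { <# surface $warning #> }

# To reuse the same session for the next run, keep $ps and clear its state:
$ps.Commands.Clear()
$ps.Streams.ClearStreams()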
Related
I'm running a script inside Invoke-Command. I have a global variable in my local session that holds many configuration settings. I'm passing this object to Invoke-Command. How do I set a property on that object in the remote session?
What you're trying to do fundamentally cannot work:
A script block that runs remotely of necessity runs in a different process (on a different machine).[1]
You can only pass a copy of a local object to a remotely executing script block - either via Invoke-Command's -ArgumentList parameter or via a $using: reference inside the script block, but whatever modifications you make to that copy are not seen by the caller.
The solution is to output (return) the data of interest from the remotely executed script block, which you can then (re)assign to a local variable on the caller's side.
Note: Due to the limitations of the XML-based serialization that PowerShell employs across process / computer boundaries, what is being passed and/or returned is situationally only a (method-less) emulation of the original object; that is, type fidelity cannot be guaranteed - see this answer for background information.
For example:
# Define a local object.
$localObject = [pscustomobject] @{ Prop = 42 }
# Pass a *copy* of that object to a remotely executing script block,
# modify it there, then *output* the *modified copy* and
# assign it back to the local variable.
$localObject =
    Invoke-Command -ComputerName someServer {
        $objectCopy = $using:localObject
        # Modify the copy.
        $objectCopy.Prop++
        # Now *output* the modified copy.
        $objectCopy
    }
Note that while type fidelity is maintained in this simple example, the object returned is decorated with remoting-related properties, such as .PSComputerName - see this answer.
[1] Passing objects as-is to a different runspace, so that updates to it are also seen by the caller, works only if the other runspace is a thread in the same process, such as when you use Start-ThreadJob and ForEach-Object -Parallel, and even then only if the object is an instance of a .NET reference type. Also, if there's a chance that multiple runspaces try to update the given object in parallel, you need to manage concurrency explicitly in order to avoid thread-safety issues.
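For example, assuming Start-ThreadJob is available (it ships with PowerShell 7 and can be installed on Windows PowerShell), a thread job can update a caller-owned reference type directly because it runs in the same process:

$state = [hashtable]::Synchronized(@{ Progress = 0 })

$job = Start-ThreadJob -ScriptBlock {
    ($using:state).Progress = 100   # modifies the caller's object, not a copy
}
Receive-Job -Job $job -Wait -AutoRemoveJob

$state.Progress   # -> 100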
I'm trying to automate a lengthy process that can be broken down into several steps (say, Steps 1-5).
I have written a script that separates these into functions and calls them sequentially.
However, we now have the additional requirement of making the script restartable. That is, if it fails in any one of the steps, rerunning the script would cause it to skip all completed steps and retry from the failed one.
Is this at all possible without referencing an external log file?
I've tried using workflows but it seems like recursion isn't supported.
Any ideas?
Some options aside from using a log file:
Use the registry
You can set a registry value depending on which step you stopped on. This removes the need for a log file, but is somewhat similar in terms of 'external' storage (a sketch of this approach follows at the end of this answer).
Check the task status on each run
Depending on the tasks, you could have the script 'test' step 3, for example, to see whether it has already been completed, then check steps 4, 5, etc., until it encounters one it still needs to run, and continue from there. This may be impossible, or require a lot of overhead code for not much payoff, though.
Allow the user to continue from within the script.
This is probably the best way of doing it (aside from just using a log file): run the script in blocks, and when an error is encountered, prompt the user to fix the issue before pressing 'Enter' to re-run the previous script block. This makes it easy to provide information about what failed as well.
The main thing here is that once a script quits, in order to know what happened in its last run, it needs an external source of information, or it needs to handle the problem in another way.
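A rough sketch of the registry approach (the key path and step names are illustrative, and it assumes each step function throws a terminating error on failure so the run stops before the marker is advanced):

$keyPath = 'HKCU:\Software\MyCompany\MyScript'
if (-not (Test-Path $keyPath)) { New-Item -Path $keyPath -Force | Out-Null }

$steps = 'Step1', 'Step2', 'Step3', 'Step4', 'Step5'    # your existing functions
$lastDone = (Get-ItemProperty -Path $keyPath -ErrorAction SilentlyContinue).LastCompletedStep

$start = if ($lastDone) { $steps.IndexOf($lastDone) + 1 } else { 0 }
for ($i = $start; $i -lt $steps.Count; $i++) {
    & $steps[$i]    # a terminating error here leaves LastCompletedStep at the prior step
    Set-ItemProperty -Path $keyPath -Name LastCompletedStep -Value $steps[$i]
}

# Clear the marker once everything succeeds so the next run starts from the top.
Remove-ItemProperty -Path $keyPath -Name LastCompletedStep -ErrorAction SilentlyContinue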
I have many human tasks. After starting the process, I want to update some process variable values with a REST API call that relates to the current task. If anyone knows how to do that, please comment below.
I tried with /execute, but that only starts the task. How do I update the process variables for an already started process instance?
Based on the documentation
Here is how to update the process variable. However, this updates the variable on the entire process instance rather than only on that specific task.
server/containers/{id}/processes/instances/{pInstanceId}/variables - POST
If you want to update a process variable from a task, you should do it during task completion. However, this requires the task to have output variables; otherwise, it won't have any effect.
server/containers/{id}/tasks/{tInstanceId}/states/completed - PUT
Anyway, the full REST documentation can be viewed at
{localhost}:{port}/kie-server/docs
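For illustration, updating a variable via the first endpoint might look like this from PowerShell (the base URL, container id, instance id, variable name, and credentials are placeholders, and the JSON-map body shape should be confirmed against the /docs page above):

$base = 'http://localhost:8080/kie-server/services/rest/server'
$params = @{
    Method      = 'Post'
    Uri         = "$base/containers/my-container/processes/instances/42/variables"
    Credential  = (Get-Credential)
    ContentType = 'application/json'
    Body        = (@{ myVariable = 'new value' } | ConvertTo-Json)
}
Invoke-RestMethod @params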
I would like to come up with a mechanism by which I can share 'data' between different PowerShell processes. This would be in order to implement a kind of job system, whereby a function can be run in one PowerShell process, complete, and then somehow communicate its status to a function run from another (distinct) PowerShell process...
I guess what I'd ideally like is for PSJob results to be shareable between sessions, but this does not seem to be possible.
I can think of a few dirty ways of achieving this (like O/S environment variables), but am I missing a semi-elegant way?
For example:
Function giveMeNumber
{
    $return_value = Get-Random -Minimum -100 -Maximum 100
    Return $return_value
}
What are some ways I could get this function to store its return value somewhere and then grab it from another PowerShell session (without using a database)?
Cheers.
The QA mentioned by Keith refers to using MSMQ, a message queueing feature optionally available on desktop, mobile & server OS's from Microsoft.
It doesn't run by default on desktop OS's so you would have to ensure that the appropriate service was started. Seems like serious overkill to me unless you wanted something pretty beefy.
Of course, the most common choice for this type of task would be a simple shared file.
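For example (the path is just an illustration), one session can persist the function's result with Export-Clixml and another can read it back later with Import-Clixml:

# Session A
giveMeNumber | Export-Clixml -Path "$env:TEMP\giveMeNumber.result.xml"

# Session B (a different PowerShell process)
$result = Import-Clixml -Path "$env:TEMP\giveMeNumber.result.xml"
$result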
Alternatively, you could create a TCP listener in each of the jobs that you want to have accept external info. I've not done this myself in PowerShell, though I know it is possible; Node.js or Python would be a more familiar environment. Seems like overkill if a shared file would do the job!
Another way would be to use the registry. Though you might consider that cheating since it is actually a database (of a very broken and simplistic sort).
I'm actually not sure that environment variables would work, since they can be picky about the parent environment scope (for example, setting an environment variable in a cmd session doesn't make it available outside of that scope by default).
UPDATE: Doh, missed a few! Some of them very obvious. Microsoft have a list:
Clipboard
COM
Data Copy
DDE
File Mapping
Mailslots
Pipes
RPC
Windows Sockets
Pipes was the one I was trying to remember. Windows sockets would be similar to a TCP listener.
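If you do want to try pipes, a minimal named-pipe exchange between two PowerShell processes might look like this (the pipe name and message are illustrative):

# Process A: wait for a connection and read one line.
$server = [System.IO.Pipes.NamedPipeServerStream]::new('PsJobStatus', [System.IO.Pipes.PipeDirection]::In)
$server.WaitForConnection()
$reader = [System.IO.StreamReader]::new($server)
$status = $reader.ReadLine()
$reader.Dispose(); $server.Dispose()

# Process B: connect and send a status message.
$client = [System.IO.Pipes.NamedPipeClientStream]::new('.', 'PsJobStatus', [System.IO.Pipes.PipeDirection]::Out)
$client.Connect(5000)
$writer = [System.IO.StreamWriter]::new($client)
$writer.AutoFlush = $true
$writer.WriteLine('giveMeNumber completed: 42')
$writer.Dispose(); $client.Dispose()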
I am calling a series of PowerShell functions from a master script (each function is a test).
I specify the tests in an XML file and I want them to run in order.
The functions to call are organized in PowerShell module files (.psm1). The master script calls Import-Module as needed and then calls the function via something like this...
$newResults = & "$runFunction" @ARGS
or this...
$newResults = Invoke-Expression $runFunctionWithArgs
I have gotten both to work just fine and the XML file parsing invokes these commands in the correct order.
Problem: The tests are apparently launched asynchronously so that the first test I launch does not necessarily get invoked and complete before the second test is invoked.
Note, the tests are functions in a PowerShell module and not commands so I do not think that Start-Process will work (but please tell me if you know how to make that work).
More Details:
It would take too much to add all the code, but essentially what each function call does is create a hashtable with one or more "TestResult" objects. "TestResult" has things like Success codes and a TimeStamp. Each test does things that take different amounts of time, but all synchronous. I would expect the timestamps to be the same order that I called each test, especially since the first thing each test does is get the timestamp so it should not depend on what the test does. When I run in the ISE, everything goes in order. When I run in the command window, the timestamps do not match my expected order.
Workaround:
My working theory is still that PowerShell is somehow parallelizing the calls. I can get consistent results by making the invocation of each call dependent on the results of the previous call. It is a dummy check, because I know that what I test will always be true, but PowerShell doesn't know that:
if ($newResults.Count -ne [Long]::MaxValue) { $newResults = & "$runFunction" @ARGS }
PowerShell thinks it needs to know whether the previous call's result count is MaxValue, so it has to wait for that call to complete before invoking the next test.