I'm running a script inside Invoke-Command. I have a global variable in my local session that got many configurations. I'm passing this object to Invoke-Command. How do I set a property inside that object in the remote session?
What you're trying to do fundamentally cannot work:
A script block that runs remotely of necessity runs in a different process (on a different machine).[1]
You can only pass a copy of a local object to a remotely executing script block - either via Invoke-Command's -ArgumentList parameter or via a $using: reference inside the script block - and whatever modifications you make to that copy are not seen by the caller.
The solution is to output (return) the data of interest from the remotely executed script block, which you can then (re)assign to a local variable on the caller's side.
Note: Due to the limitations of the XML-based serialization that PowerShell employs across process / computer boundaries, what is being passed and/or returned is situationally only a (method-less) emulation of the original object; that is, type fidelity cannot be guaranteed - see this answer for background information.
For example:
# Define a local object.
$localObject = [pscustomobject] @{ Prop = 42 }
# Pass a *copy* of that object to a remotely executing script block,
# modify it there, then *output* the *modified copy* and
# assign it back to the local variable.
$localObject =
    Invoke-Command -ComputerName someServer {
        $objectCopy = $using:localObject
        # Modify the copy.
        $objectCopy.Prop++
        # Now *output* the modified copy.
        $objectCopy
    }
Note that while type fidelity is maintained in this simple example, the returned object is decorated with remoting-related properties, such as .PSComputerName - see this answer.
[1] Passing objects as-is to a different runspace, so that updates to it are also seen by the caller, works only if the other runspace is a thread in the same process, such as when you use Start-ThreadJob and ForEach-Object -Parallel, and even then only if the object is an instance of a .NET reference type. Also, if there's a chance that multiple runspaces try to update the given object in parallel, you need to manage concurrency explicitly in order to avoid thread-safety issues.
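The same-process exception described in the footnote can be sketched as follows (assuming Start-ThreadJob is available, e.g., in PowerShell 7+ or via the ThreadJob module):

```powershell
# A [pscustomobject] is an instance of a .NET reference type.
$obj = [pscustomobject] @{ Prop = 42 }

# A thread job runs in the same process, so $using:obj refers to
# the *same* object, not a serialized copy.
Start-ThreadJob { ($using:obj).Prop++ } |
  Receive-Job -Wait -AutoRemoveJob

$obj.Prop   # -> 43; the caller sees the modification directly.
```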
I am trying to get around Invoke-WebRequest's propensity to hang in memory and kill my entire script. So far I have written a script block, launched via Start-Job, which I call from a foreach loop:
start-job -scriptblock {invoke-webrequest $using:varSiteVariable} -name jobTitle | out-null
I wait 10 seconds and then use Receive-Job to capture the output from the most recent job into a variable, which I then want to parse as a PowerShell HtmlWebResponseObject in the same manner I would if I were using Invoke-WebRequest directly. The logic behind this is that I can then throw, abandoning the run and returning to square one, if there is nothing to parse, as Invoke-WebRequest has clearly crashed again.
However, when I pull the data from jobTitle into a variable, it is always of type PSObject, meaning it lacks the crucial ParsedHtml property which I'm using to perform all of the further parsing of the HTML code; that property appears to belong specifically to objects of type HtmlWebResponseObject. There does not appear to be any way that I have found to force-cast the object into this type, nor any way to convert one into the other after the fact.
I cannot simply define the variable from within the job and then refer to it outside of the job, as the two commands happen in different contexts and share no working space. I cannot write the data to a file as I am unable to import it back as the right data-type for the processing I need to perform.
Does anyone know how I can convert my PsObject data into HtmlWebResponseObject data?
I ended up fixing this with the help of this article:
https://gallery.technet.microsoft.com/Powershell-Tip-Parsing-49eb8810
I couldn't re-cast the data as HtmlWebResponseObject, but I was able to make a new COM Object of type HTMLFile and write the data from the variable grabbed from my job into that. The script needed to be slightly re-written but the all-important methods I was using to parse the data work as before.
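A hedged sketch of that workaround (the job name and property access are illustrative, and this relies on Windows COM/mshtml support):

```powershell
# Grab the raw HTML string that the background job produced.
$raw = (Receive-Job -Name jobTitle).Content

# Build an HTMLFile COM document and feed it the HTML.
$dom = New-Object -ComObject 'HTMLFile'
try   { $dom.IHTMLDocument2_write($raw) }   # works in Windows PowerShell 5.x
catch { $dom.write($raw) }                  # fallback on older versions

# The usual DOM-style parsing methods are then available,
# much like with HtmlWebResponseObject.ParsedHtml:
$dom.getElementsByTagName('a') | ForEach-Object { $_.href }
```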
At the end of my script I use 'ie' | ForEach-Object {Remove-Variable $_ -Force}. It works fine in PS 2 (Windows 7) but PS 5 (Windows 10) throws an error:
Cannot remove variable ie because the variable has been optimized and
is not removable. Try using the Remove-Variable cmdlet (without any
aliases), or dot-sourcing the command that you are using to remove the
variable.
How can I make it play nice with PS 5; or should I just use Remove-Variable 'ie' -Force?
The recommended way to release COM objects is to call the ReleaseComObject method, passing it the reference to your COM object instance ($ie).
Here is more detailed explanation and sample code from a Windows PowerShell Tip of the Week that shows how to get rid of COM objects:
Whenever you call a COM object from the common language runtime (which
happens to be the very thing you do when you call a COM object from
Windows PowerShell), that COM object is wrapped in a “runtime callable
wrapper,” and a reference count is incremented; that reference count
helps the CLR (common language runtime) keep track of which COM
objects are running, as well as how many COM objects are running. When
you start Excel from within Windows PowerShell, Excel gets packaged up
in a runtime callable wrapper, and the reference count is incremented
to 1.
That’s fine, except for one thing: when you call the Quit method and
terminate Excel, the CLR’s reference count does not get decremented
(that is, it doesn’t get reset back to 0). And because the reference
count is not 0, the CLR maintains its hold on the COM object: among
other things, that means that our object reference ($x) is still valid
and that the Excel.exe process continues to run. And that’s definitely
not a good thing; after all, if we wanted Excel to keep running we
probably wouldn’t have called the Quit method in the first place. ...
... calling the ReleaseComObject method [with] our
instance of Excel ... decrements the reference count for the object in
question. In this case, that means it’s going to change the reference
count for our instance of Excel from 1 to 0. And that is a good thing:
once the reference count reaches 0 the CLR releases its hold on the
object and the process terminates. (And this time it really does
terminate.)
$x = New-Object -com Excel.Application
$x.Visible = $True
Start-Sleep 5
$x.Quit()
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($x)
Remove-Variable x
The message "Cannot remove variable ie because the variable has been optimized and is not removable." most likely means that you have tried to access (inspect, watch, or otherwise touch) a variable that has already been removed by the optimizer.
wp78de's helpful answer explains what you need to do to effectively release a COM object instantiated in PowerShell code with New-Object -ComObject.
Releasing the underlying COM object (which means terminating the process of a COM automation server such as Internet Explorer) is what matters most, but it's worth pointing out that:
Even without calling [System.Runtime.Interopservices.Marshal]::ReleaseComObject($ie) first, there's NO reason why your Remove-Variable call should fail (even though, if successful, it wouldn't by itself release the COM object).
I have no explanation for the error you're seeing (I cannot recreate it, but it may be related to this bug).
There's usually no good reason to use ForEach-Object with Remove-Variable, because you can not only pass one variable name directly, but even an array of names to the (implied) -Name parameter - see Remove-Variable -?;
Remove-Variable ie -Force should work.
Generally, note that -Force is only needed to remove read-only variables; if you want to (also) guard against the case where a variable by the specified name(s) doesn't exist, (also) use -ErrorAction Ignore.
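For example (the variable names here are arbitrary):

```powershell
# Two sample variables to clean up.
$ie, $doc = 'one', 'two'

# One Remove-Variable call accepts an array of names; -ErrorAction Ignore
# tolerates names that don't exist, and -Force covers read-only variables.
Remove-Variable ie, doc, neverDefined -Force -ErrorAction Ignore

Get-Variable ie -ErrorAction Ignore   # no output; the variable is gone
```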
I am calling a series of PowerShell functions from a master script (each function is a test).
I specify the tests in an XML file and I want them to run in order.
The functions to call are organized in PowerShell module files (.psm1). The master script calls Import-Module as needed and then calls the function via something like this...
$newResults = & "$runFunction" @ARGS
or this...
$newResults = Invoke-Expression $runFunctionWithArgs
I have gotten both to work just fine and the XML file parsing invokes these commands in the correct order.
Problem: The tests are apparently launched asynchronously so that the first test I launch does not necessarily get invoked and complete before the second test is invoked.
Note, the tests are functions in a PowerShell module and not commands so I do not think that Start-Process will work (but please tell me if you know how to make that work).
More Details:
It would take too much to add all the code, but essentially what each function call does is create a hashtable with one or more "TestResult" objects. "TestResult" has things like Success codes and a TimeStamp. Each test does things that take different amounts of time, but all synchronous. I would expect the timestamps to be the same order that I called each test, especially since the first thing each test does is get the timestamp so it should not depend on what the test does. When I run in the ISE, everything goes in order. When I run in the command window, the timestamps do not match my expected order.
Workaround:
My working theory is still that PowerShell is somehow parallelizing the calls. I can get consistent results by making the invocation of each call dependent on the results of the previous call. It is a dummy check, because I know that what I test will always be true, but PowerShell doesn't know that:
if ($newResults.Count -ne [Long]::MaxValue) { $newResults = & "$runFunction" @ARGS }
PowerShell thinks that it needs to know if the previous call count is not MaxValue.
I'm attempting to add some fairly limited PowerShell support in my application: I want the ability to periodically run a user-defined PowerShell script and show any output and (eventually) be able to handle progress notification and user-prompt requests. I don't need command-line-style interactive support, or (I think) remote access or the ability to run multiple simultaneous scripts, unless the user script does that itself from within the shell I host. I'll eventually want to run the script asynchronously or on a background thread, and probably seed the shell with some initial variables and maybe a cmdlet, but that's as "fancy" as this feature is likely to get.
I've been reading the MSDN documentation about writing host application code, but while it happily explains how to create a PowerShell object, or Runspace, or RunspacePool, or Pipeline, there's no indication about why one would choose any of these approaches over another.
I think I'm down to one of these two, but I'd like some feedback about which approach is the better one to take:
PowerShell shell = PowerShell.Create();
shell.AddCommand(/* set initial state here? */);
shell.AddStatement();
shell.AddScript(myScript);
shell.Invoke(/* can set host! */);
or:
Runspace runspace = RunspaceFactory.CreateRunspace(/* can set host and initial state! */);
PowerShell shell = PowerShell.Create();
shell.Runspace = runspace;
shell.AddScript(myScript);
shell.Invoke(/* can set host here, too! */);
(One of the required PSHost-derived class methods is EnterNestedPrompt(), and I don't know whether the user-defined script I run could cause that to get called or not. If it can, then I'll be responsible for "starting a new nested input loop" (as per here)... if that impacts which path to take above, that would also be good to know.)
Thanks!
What are they?
Pipeline
A Pipeline is a way to chain commands inside a PowerShell script. Example: you "pipe" the output from Get-ChildItem to Where-Object with | to filter it:
Get-ChildItem | Where-Object { $_.Length -gt 1kb }
PowerShell Object
The PowerShell object refers to a PowerShell session, like the one you would get when you start powershell.exe.
Runspace
Every PowerShell session has its own runspace (you'll always get output from Get-Runspace). It defines the state of the PowerShell session; hence the InitialSessionState object/property of a runspace. You may decide to create a new PowerShell session, with its own runspace, from within PowerShell to enable a kind of multithreading.
RunspacePool
Last but not least, the RunspacePool. As the name says, it's a pool of runspaces (or PowerShell sessions) that can be used to process a lot of complicated tasks. As soon as one of the runspaces in the pool has finished its task, it may take the next task, until everything is done. (100 things to do with 10 runspaces: on average they process 10 each, but one may process 8 while two others process 11...)
When to use what?
Pipeline
The pipeline is used inside scripts. It makes it easier to build complex scripts and should be used as often as possible.
PowerShell Object
The PowerShell object is used whenever you need a new PowerShell session. You can create one inside an existing script, be it C# or PowerShell. It's useful for easy multithreading. On its own it will create a default session.
Runspace
If you want to create a non-standard PowerShell session, you can manipulate the runspace object before you create a PowerShell session with it. It's useful when you want to share synchronized variables, functions, or classes with the extra runspaces. Slightly more complex multithreading.
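A minimal sketch of that sharing idea, seeding a manually created runspace with a synchronized hashtable (the variable and key names are illustrative):

```powershell
# A synchronized hashtable is safe to touch from multiple runspaces.
$shared = [hashtable]::Synchronized(@{ Value = 0 })

# Create and open a second runspace, then seed its session state.
$rs = [runspacefactory]::CreateRunspace()
$rs.Open()
$rs.SessionStateProxy.SetVariable('shared', $shared)

# Run a script in that runspace; it sees the *same* hashtable.
$ps = [powershell]::Create()
$ps.Runspace = $rs
[void] $ps.AddScript({ $shared.Value++ })
[void] $ps.Invoke()

$shared.Value   # -> 1; updated by the other runspace

$ps.Dispose(); $rs.Dispose()
```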
RunspacePool
As mentioned before, it's a heavy tool for heavy work, for when one execution of a script takes hours and you need to run it very often. E.g., in combination with remoting you could simultaneously install something on every node of a big cluster, and the like.
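For concreteness, here is a minimal RunspacePool sketch in PowerShell itself (the pool sizes and workload are illustrative):

```powershell
# A pool of at least 1 and at most 10 runspaces.
$pool = [runspacefactory]::CreateRunspacePool(1, 10)
$pool.Open()

# Queue 100 small tasks; at most 10 run concurrently.
$jobs = foreach ($i in 1..100) {
    $ps = [powershell]::Create()
    $ps.RunspacePool = $pool
    [void] $ps.AddScript({ param($n) $n * 2 }).AddArgument($i)
    @{ PS = $ps; Handle = $ps.BeginInvoke() }   # start asynchronously
}

# Collect results and clean up each PowerShell instance.
$results = foreach ($j in $jobs) {
    $j.PS.EndInvoke($j.Handle)
    $j.PS.Dispose()
}
$pool.Close()

$results.Count   # -> 100
```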
You are overthinking it. The code you show in samples is a good start. Now you just need to read the result of Invoke() and check the error and warning streams.
A PowerShell host provides hooks that a runspace can use to communicate with the user: streaming and formatting output, showing progress, reporting errors, etc. For what you want to do, you do not need a PowerShell host. You can read the results of script execution back via the PowerShell class, check for errors and warnings, read the output streams, and show notifications to the user using the facilities of your application. This will be much more straightforward and effective than writing an entire PowerShell host just to show a message box when errors are detected.
Also, PowerShell object HAS a Runspace when it is created, you do not need to give it one. If you need to retain the runspace to preserve the environment just keep entire PowerShell object and clear Commands and all Streams each time after you call Invoke.
The next question you should ask is how to process result of PowerShell::Invoke() and read PowerShell::Streams.
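That processing step can be sketched from PowerShell itself, for brevity (the same members exist on the PowerShell class in C#):

```powershell
$ps = [powershell]::Create()
[void] $ps.AddScript('Write-Warning "careful"; 42; Write-Error "boom"')

$output = $ps.Invoke()   # success-stream output: a collection containing 42
$ps.HadErrors            # -> True, because something hit the error stream

# Inspect the non-success streams.
$ps.Streams.Warning | ForEach-Object { "WARN:  $_" }
$ps.Streams.Error   | ForEach-Object { "ERROR: $_" }

# To reuse the object (and keep its runspace's state), clear between calls.
$ps.Commands.Clear()
$ps.Streams.ClearStreams()
```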
I want to do some housekeeping before executing any external console applications (setting some environment vars).
In my web research, it looks like overriding NotifyBeginApplication() in $host might do the trick. Unfortunately, I can't figure out how to do that.
Here's essentially what I want to do...
$host = $host | `
Add-Member -force -pass -mem scriptmethod NotifyBeginApplication `
{ $env:_startTime = [datetime]::now; $env:_line = $myInvocation.Line }
This doesn't work as $host is constant and it may be the wrong approach anyway.
The documentation that I've been able to find states that this function is called before any "legacy" console application is executed, but another blog entry says that it's only called for console applications that have no I/O redirection.
So, is this the right way to do this? If so, how would I override the function?
If not, how could this be done?
The only alternative I've seen that might work is to fully implement a custom PSHost. That seems possible with existing available source code, but beyond what I want to attempt.
If this is code that you can modify, then create this function:
Function call {
## Set environment here
$env:FOO = "BAR"
Invoke-Expression "$args"
}
Now pass your native commands to the call function. Examples:
call cmd /c set
call cmd /c dir
call somefunkyexe /blah -ooo aaah -some:`"Funky Argument`"
If this is code that you can't modify, then things will be complicated.
I also agree with your (unfortunate) conclusion that you will need to create your own custom host to handle this.
You can create additional runspaces easily enough via scripting, but this method isn't accessible in your currently running host (the default console).