PowerShell Get-Job: "The filename or extension is too long"

I have a PS script that collects server metrics. It works absolutely fine when I run it single-threaded. However, when I set it up to run multi-threaded, I receive the error below.
An error occurred while starting the background process. Error reported: The filename or extension is too long.
$jobs = $serverList | %{ Start-Job -InitializationScript $functions -ArgumentList $_ -ScriptBlock {
    Script Line-1
    Script Line-2
    Script Line-3
    Script Line-4
    Script Line-5
    Script Line-6
    Script Line-7
}}
$functions has 10 functions, totaling about 350 lines.
Based on the research I have done, this error comes up because the initialization script has too many lines. How do I fix this without truncating any of the script?
First update:
I was able to resolve the issue by reducing the size of the script block. Based on this site, we can only have about 12 bytes of data sent in the script block. Pretty weird issue. Any suggestions on how to overcome this size limitation?
For now the issue is resolved by removing unwanted spaces and additional logging. However, if the script needs to be modified further, this fix may not hold. Looking forward to suggestions.
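One idea I am considering (an untested sketch; the path and the Get-ServerMetrics function are placeholders): dot-source the functions file inside each job instead of passing all 350 lines via -InitializationScript, since the jobs run on the same machine and can read the file:
$functionsPath = 'C:\Scripts\MetricFunctions.ps1'   # placeholder path to the file holding the 10 functions
$jobs = $serverList | %{
    Start-Job -ArgumentList $_, $functionsPath -ScriptBlock {
        param($server, $funcPath)
        . $funcPath                               # load the helper functions inside the job's own process
        Get-ServerMetrics -ComputerName $server   # placeholder for the real per-server work
    }
}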

Related

PowerShell Self-Updating Script

We have a PowerShell script to continually monitor a folder for new JSON files and upload them to Azure. We have this script saved on a shared folder so that multiple people can run this script simultaneously for redundancy. Each person's computer has a scheduled task to run it at login so that the script is always running.
I wanted to update the script, but then I would have had to ask each person to stop their running script and restart it. This is especially troublesome since we eventually want to run this script in "hidden" mode so that no one accidentally closes out the window.
So I wondered if I could create a script that updates itself automatically. I came up with the code below, and when this script is run and a new version of the script is saved, I expected the running PowerShell window to close when it hit the Exit command and then reopen a new window to run the new version of the script. However, that didn't happen.
It continues along without a blip. It doesn't close the current window, and it even keeps the output from old versions of the script on the screen. It's as if PowerShell doesn't really Exit; it just figures out what's happening and keeps going with the new version of the script. I'm wondering why this is happening. I like it, I just don't understand it.
#Place at top of script
$lastWriteTimeOfThisScriptWhenItFirstStarted = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime

#Continuous loop to keep this script running
While($true) {
    Start-Sleep 3 #seconds

    #Run this script, change the text below, and save this script
    #and the PowerShell window stays open and starts running the new version without a hitch
    "Hi"

    $lastWriteTimeOfThisScriptNow = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
    if($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
        . $PSCommandPath
        Exit
    }
}
Interesting Side Note
I decided to see what would happen if my computer lost connection to the shared folder where the script was running from. It continues to run, but presents an error message every 3 seconds as expected. But, it will often revert back to an older version of the script when the network connection is restored.
So if I change "Hi" to "Hello" in the script and save it, "Hello" starts appearing as expected. If I unplug my network cable for a while, I soon get error messages as expected. But then when I plug the cable back in, the script will often start outputting "Hi" again even though the newly saved version has "Hello" in it. I guess this is a negative side-effect of the fact that the script never truly exits when it hits the Exit command.
. $PSCommandPath is a blocking (synchronous) call, which means that Exit on the next line isn't executed until the dot-sourced script has itself exited.
Given that $PSCommandPath here is your script, which never exits (even though it seemingly does), the Exit statement is never reached (assuming that the new version of the script keeps the same fundamental while-loop logic).
While this approach works in principle, there are caveats:
You're using ., the "dot-sourcing" operator, which means the script's new content is loaded into the current scope (and generally you always remain in the same process, as you always do when you invoke a *.ps1 file, whether with . or (the implied) regular call operator, &).
While variables / functions / aliases from the new script then replace the old ones in the current scope, old definitions that you've since removed from the new version of the script would linger and potentially cause unwanted side-effects.
As you observe yourself, your self-updating mechanism will break if the new script contains a syntax error that causes it to exit, because the Exit statement then is reached, and nothing is left running.
That said, you could use that as a mechanism to detect failure to invoke the new version:
Use try { . $PSCommandPath } catch { Write-Error $_ } instead of just . $PSCommandPath
and instead of the Exit command, issue a warning (or do whatever is appropriate to alert someone of the failure) and then keep looping (continue), which means the old script stays in effect until a valid new one is found.
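A minimal sketch of that variant, assuming the new script keeps the same loop structure as the original:
While($true) {
    Start-Sleep 3
    "Hi"
    $lastWriteTimeOfThisScriptNow = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
    if($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
        try {
            . $PSCommandPath       # only returns if the new version failed to take over
        } catch {
            Write-Warning "Failed to load the new version: $_"
        }
        continue                   # keep the old loop running until a valid new version loads
    }
}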
Even with the above, the fundamental constraint of this approach is that you may exceed the maximum call-recursion depth. The nested . invocations pile up, and when the nesting limit is reached, you won't be able to perform another, and you're stuck in a loop of futile retries.
That said, as of Windows PowerShell v5.1 this limit appears to be around 4900 nested calls, so if you never expect the script to be updated that frequently while a given user session is active (a reboot / logoff would start over), this may not be a concern.
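If you do want to keep an eye on it, the current nesting depth can be logged from inside the loop, e.g.:
# Diagnostic only: show how deeply the dot-sourced invocations have nested so far
Write-Verbose ("Call depth: {0}" -f (Get-PSCallStack).Count) -Verbose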
Alternative approach:
A more robust approach would be to create a separate watchdog script whose sole purpose is to monitor for new versions, kill the old running script and start the new one, with an alert mechanism for when starting the new script fails.
Another option is to have the main script have "stages" where it runs commands based on the name of the highest-revision script in a folder. I think mklement0's watchdog is a genius idea, though.
But what I'm referring to is doing what you do now, but using variables as your command, and those variables get updated with the highest-numbered script name. This way you just drop 10.ps1 into the folder and it will ignore 9.ps1. And the function in that script would be named mainfunction10, etc.
Something like
$command = ((get-childitem c:\path\to\scriptfolder\).basename)[-1]
& "C:\path\to\scruptfolder\\$command"
The files would have to be named alphabetically from oldest to newest. Otherwise you'll have to Sort-Object by date:
$command = ((get-childitem c:\path\to\scriptfolder\ | sort-object -Property lastwritetime).basename)[-1]
& "C:\path\to\scruptfolder\\$command"
Or dot-source it instead of using it as a command, and then have the later code call the functions like function$command, where the function name incorporates the script's name.
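A rough, untested sketch of that dot-sourcing variant (paths and the mainfunction naming convention are placeholders):
$command = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).BaseName)[-1]
. "C:\path\to\scriptfolder\$command.ps1"     # dot-source the newest revision, e.g. 10.ps1
& "mainfunction$command"                     # call the matching function, e.g. mainfunction10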
I still like the watch dog idea more.
The watchdog would look sort of like
While ($true) {
    $new = ((Get-ChildItem c:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).FullName)[-1]
    If ($old -ne $new) {
        Kill $old
        Sleep 10
        & $new
    }
    $old = $new
    Sleep 600
}
Mind you, I'm not certain how the scripts are run, and you may need to seek out instances of PowerShell based on the command used to start them.
$kill = ((WMIC path win32_process get Caption,Processid,Commandline).where({$_.commandline -contains $command})).processid
Kill $kill
Would replace kill $old
This command is an educated guess and untested.
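An equally untested alternative to the WMIC line, using Get-CimInstance (assumes $command holds something unique from the old script's command line, such as its path):
$kill = (Get-CimInstance Win32_Process -Filter "Name = 'powershell.exe'" |
    Where-Object { $_.CommandLine -match [regex]::Escape($command) }).ProcessId
Stop-Process -Id $kill -Force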
Other tricks would be running the main script from the watchdog as a job, getting the job ID, and then checking for file changes. If a new file comes in, the watchdog could kill the job and repeat the whole process.
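That job-based variant could look something like this (untested; paths and timings are placeholders):
$script = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).FullName)[-1]
$job = Start-Job -FilePath $script
While ($true) {
    Start-Sleep 600
    $newest = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).FullName)[-1]
    If ($newest -ne $script) {
        Stop-Job $job          # kill the job running the old version
        Remove-Job $job
        $script = $newest
        $job = Start-Job -FilePath $script
    }
}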
You could also just have the script end, and have a Windows scheduled job rerun it every 10 minutes. That way the script simply runs every ten minutes, though this costs more per startup.
Instead of Exit you could use break to kill the loop, and the script will exit naturally.
You can use Test-Connection to check for the server. But if it's every 3 seconds, that's a lot of pings from a lot of computers.

Powershell to EXE tool Advice

So here's the deal. Because of a number of... let's just say not PowerShell smart people who will be using an incredibly complex application that I just finished, I need the ability to package it in an exe wrapper.
This shouldn't be that hard
I was able to successfully use PS2EXE, except for some reason with AD, it throws out a whooooole bunch of AD text that I can't get rid of. Tried to fix that for a few days before getting frustrated and moving on.
Then, I discovered PowerGUI. I can't say that I like it, at all. However, its compiler was exactly what I was looking for! Except for the fact that Exchange 2010 snap-ins are not compatible with .NET 4.5 through this application.
I want to make it very clear that my script works perfectly on multiple different computers, but as soon as I use any of these tools, everything breaks.
An exe is the best thing that I can think of to simplify the interface, and keep the Technically Intellectually Stunted from breaking everything, or running to me with every little error because they somehow got into the code and typed something and saved it, and now nothing works and it's the end of the world and they have no idea what happened.
If you guys know of any tools to wrap this up into an exe, or have any other ideas on how to help, I would really appreciate anything you guys can give me.
You have never failed me in the past!
From my point of view, if you really want an EXE file you should write a .NET application; it's not that hard to embed PowerShell cmdlets.
To prevent end users from modifying your code, I know of two solutions:
First: set the execution policy to AllSigned on the users' computers and sign the scripts you deploy. You can use your own certificates (not expensive at all) or public certificates (more expensive). One drawback of this solution is that it does not prevent users from seeing the code. Another big drawback is that setting up a PKI and code-signing infrastructure takes a lot of time.
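The signing itself is only a few lines once a code-signing certificate is installed; a sketch (the script path and certificate store location are assumptions):
Set-ExecutionPolicy AllSigned -Scope LocalMachine                 # on the users' computers
$cert = (Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert)[0]  # your code-signing certificate
Set-AuthenticodeSignature -FilePath C:\Deploy\MyApp.ps1 -Certificate $cert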
Second: for non-interactive scripts (be careful, it's a kind of makeshift job):
Create a new user account
Only allow access to the script file for the new account.
Set up a task in the Windows scheduler to run that script file with PowerShell under that specific account. The permissions for the scheduled tasks allow read and execute access to the user(s). Then set the task to "disabled".
Whenever the script file needs to be run, the corresponding task is manually started by the user.
This solution also allows you to execute your script remotely.
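The task itself can also be created from PowerShell rather than through the GUI; a rough sketch (task name, script path and account are placeholders; the ScheduledTasks module requires Windows 8 / Server 2012 or later):
$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File C:\Locked\MyScript.ps1'
Register-ScheduledTask -TaskName 'RunMyScript' -Action $action -User 'DOMAIN\ScriptAccount' -Password 'P@ssw0rd'
Disable-ScheduledTask -TaskName 'RunMyScript'     # leave it disabled until it is needed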
When I had a similar deployment problem - 1) users didn't know PowerShell, 2) I didn't want them to have to understand things like execution policy, 3) or how to start PS, 4) etc. - I wrapped it in a batch file. I also wanted to make sure that experienced PS users still had the capabilities of PS, so the batch file determined whether it was running under PS and, if so, ran in the current PS session. I was never too worried that users would mess with the script - they were happy if it "just worked". So whether users liked Explorer, CMD.EXE, or PS, they were all accommodated.
The batch file I wrote first runs a bit of PowerShell code to determine whether the batch file's process is the grandchild of a PowerShell process. If it is, then the batch file is being invoked from PS. The execution policy is also checked, and if it is lenient enough, Wscript.SendKeys is used to send keystrokes to PS to get the script running in the current PS session. If it isn't, a new PS session is started using the -ExecutionPolicy parameter, and the script is passed as a command-line argument (-Command).
This bit of powershell code communicates back to the .CMD file using a return code. Sorry it's cryptic, but the length of command line parameters is limited. Here's the code:
set scr= $mp=[diagnostics.process]::getcurrentprocess().id
set scr=%scr%; $pp=([wmi]\"win32_process.handle='$mp'\").parentprocessid
set scr=%scr%; $gp=([wmi]\"win32_process.handle='$pp'\").parentprocessid
set scr=%scr%; $ep=[int][microsoft.powershell.executionpolicy](get-executionpolicy)
set scr=%scr%; try {$pnp=1-[int](([wmi]\"win32_process.handle='$gp'\").Name -eq \"powershell.exe\")
set scr=%scr%; } catch {$pnp=1}
set scr=%scr%; $ev = (8 * $pnp + $ep) -band 0xB; %wo% pp: $pp gp: $gp ev: $ev; if ($ev -le 1) {
set scr=%scr% %wo% Launching within existing powershell session...`n;
set scr=%scr% $w=new-object -com wscript.shell;$null=$w.appactivate($gp);
set scr=%scr%; $w.sendkeys(\"^&{{}`$st =cat "%me%";`$sc=`$st -join [char]10 -split 'rem PS script';
set scr=%scr% `$script:myArgs = `\" %*`\";`$sb=[scriptblock]::create{(} `$sc[3]{)};. `$sb{}}~\")
set scr=%scr%; }
set scr=%scr%; exit $ev
powershell -noprofile -Command %scr%
%wo% is there to allow debugging this "checker script". If debugging is on, %wo% is set to write-host. Otherwise it is set to define a "null" function and then invoke it. The null function doesn't do anything, so the message passed to it as an argument is not output.
Note the escaping when invoking SendKeys. ^ is the CMD.EXE escape character, and SendKeys has its own escape mechanism, as does PS.
If run from PS you end up in a PS session thanks to SendKeys. Otherwise the batch file does this:
set scr= ren function:prompt prompto
set scr=%scr%; function prompt{ 'myApp: '+(prompto)}
set scr=%scr%; $st= (cat %me%) -join \"`n\";
set scr=%scr%; $sx=($st -split 'rem PS script')
set scr=%scr%; $sc=$sx[3]
set scr=%scr%; %wo% myArgs: $myArgs script length: $sc.length
set scr=%scr%; ^&{$script:myArgs=\"%*\"; iex $sc}
title MyApp
rem Change the number of lines on the console if currently set to 25
for /f "tokens=2" %%i in ('mode con^|findstr Lines:') do if %%i LEQ 25 (mode con lines=50&color 5F)
powershell -noexit -noprofile -command "%scr%"
This "helper script" also can't be too long. So the helper script reads the original .CMD file and then splits it by using the string 'rem PS script'. That string will be in both this helper script as well as in the batch file (separating the batch file statements from PS statements). In my case the string is also in the batch file comments, so that is why the index of 3 is used.
Your PS script can define functions or a module. Your PS script can also output some introductory info to explain to users how to get started, how to get help, or whatever you want.
Rather than just using the PS command line, your PS script could create its own interactive environment (using Read-Host, for example). However, I didn't want to do that because it would have prevented experienced PS users from using their knowledge of PS. For example, if your script requires a username/password, an experienced PS user could use Get-Credential to create a credential to send to your script.

wusa silent install error

I am trying to automate updating Powershell on Windows 7 using Windows6.1-KB2506143-x64.msu, and having a heck of a time. The following code works fine in a standalone ps1 file. And it works in my main ps1 file. But when run from a module it fails with exit code -2145124341. This is in PS v2, where negative exit codes are handled wrong, so that number is perhaps useless, and FWIW I have a good 40 other installers of various types that work from this module. However, this is my first attempt at automating msu files, so maybe there is a known interaction here that I haven't discovered yet? There's thousands of lines of code between the root ps1 file where this works and the module where it doesn't, so tracking down what is triggering the error is going to be a beast without some sort of trail to follow at the very least. So, anyone have an idea where I should start?
$filePath = 'wusa.exe'
$argumentList = '"\\PX_SERVER\Rollouts\Microsoft\Windows6.1-KB2506143-x64.msu" /quiet /norestart'
$exitCode = (Start-Process -filePath:$filePath -argumentList:$argumentList -wait -errorAction:stop -passThru).exitCode
Also, running wusa.exe leaves some detritus in the script folder, but only when it is run from the module. Is this an issue with the msu file, or just a bug in wusa? Or does it point at what is causing the issue perhaps?
I had hoped to get this update to work to enable some new features, but between not being able to automate and the garbage being left behind, I am very close to abandoning that path and just continuing to target v2. But hopefully someone can point me in the right direction, as that is not my preferred solution at all.
A few thoughts on first reading:
The ArgumentList parameter for Start-Process needs an ARRAY to work well:
$argumentList = @( "\\PX_SERVER\Rollouts\Microsoft\Windows6.1-KB2506143-x64.msu", "/quiet", "/norestart" )
wusa.exe takes a log parameter: /log:c:\fso\install.log. Can you add it to your script for this particular package to check what happens?
A PowerShell script trying to update PowerShell ... I'm not quite sure this is meant to work ... it's the only case in which I'll fall back on another scripting language (people, please correct me if I'm wrong ...).
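Putting those two suggestions together in the original call might look like this (the log path is just an example):
$filePath = 'wusa.exe'
$argumentList = @(
    '"\\PX_SERVER\Rollouts\Microsoft\Windows6.1-KB2506143-x64.msu"',
    '/quiet',
    '/norestart',
    '/log:c:\fso\install.log'
)
$exitCode = (Start-Process -FilePath $filePath -ArgumentList $argumentList -Wait -ErrorAction Stop -PassThru).ExitCode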
Please let me know the result of the wusa.exe /log command, thanks

Powershell config to force a batch file to run within the powershell window?

I've got a powershell script that eventually passes a stack of arguments into a batch file via invoke-expression command.
However, on one server, when the powershell scripts executes that batch file, that batch file opens in a new window, but on the other server, the batch file executes within the powershell window.
What that means, is that I've got a sleep interval that is starting once the batch file begins executing in the new window, and thus screwing up my timings, unlike the other server, where the sleep interval doesn't begin until after the batch file has finished executing.
So my question is... does anybody know why the behaviours are different between the two servers, and how to get the batch file to execute in the powershell window? I'm thinking it's a configuration thing, but can't actually find anything that tells me how to make it do what I want it to do.....
Thanks!
--edit--
I'm currently just piping the line straight through like this:
E:\Software\ibm\WebSphere\AppServer\bin\wsadmin -lang jython -username $($username) -password $($password) -f "F:\Custom\dumpAllThreads.py" $($servers)
Previously, it was
$invokeString = 'E:\Software\ibm\WebSphere\AppServer\bin\wsadmin -lang jython -username $($username) -password $($password) -f "F:\Custom\dumpAllThreads.py" $($servers)'
$output = invoke-expression $invokeString
Both had the same behaviour.
So my question is... does anybody know why the behaviours are different between the two servers
Most often I've seen this sort of thing related to how a script is called. If the same user is logged on multiple times on the same server (e.g., console and RDP), then the window might appear in a different session. Similarly, if the script runs as a scheduled task and the user that runs the task isn't the user logged on, the window will never be visible. If the same user is logged on, it might be visible.
how to get the batch file to execute in the powershell window?
You could try Start-Process with -NoNewWindow, as @Paul mentions.
However....
What that means, is that I've got a sleep interval that is starting once the batch file begins executing in the new window, and thus screwing up my timings, unlike the other server, where the sleep interval doesn't begin until after the batch file has finished executing.
It sounds like your actual problem is that your code has a race condition. You should fix the actual problem. Use Start-Process with the -Wait parameter, or use the jobs system in PowerShell.
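For example, something along these lines (untested; it assumes wsadmin.bat is the actual batch file behind the wsadmin call, and the quoting may need adjusting for your arguments):
$wsadminArgs = "-lang jython -username $username -password $password -f `"F:\Custom\dumpAllThreads.py`" $servers"
Start-Process -FilePath 'E:\Software\ibm\WebSphere\AppServer\bin\wsadmin.bat' -ArgumentList $wsadminArgs -NoNewWindow -Wait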

Get-Content with wait parameter doesn't update in Powershell

I'm trying to monitor a file using Get-Content $path -wait in Windows PowerShell v3.0. Sometimes when I execute this command in PowerShell it functions as expected. But sometimes it seems to execute only get-content without the -wait parameter: even though the file gets updated, the update isn't shown in PowerShell. If I cancel the command and rerun it, it shows the updated file content.
What do I need to do?
EDIT: It seems to update in blocks after a while, but it's not really real-time.
Not allowed to comment (don't have 50 reputation), so I have to give an answer...
Less for Windows (http://gnuwin32.sourceforge.net/packages/less.htm) with the +F or Shift-F option (http://www.commandlinefu.com/commands/view/1024/make-less-behave-like-tail-f) showed updated file content where PowerShell "get-content $path -wait" did not.
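For reference, once the GnuWin32 less is on the PATH, the invocation from the PowerShell prompt is simply (the log path is a placeholder):
less +F C:\path\to\monitored.log     # press Ctrl+C, then q, to stop following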