I have been picking at this issue for a few weeks now with no resolution or fix found anywhere on the web. When I run a Dell BIOS update with /s and /f during a new Windows 10 21H1 build task sequence, the update runs successfully and the BIOS update log reports error 2, meaning a reboot is needed to perform the BIOS update. So in the next step of the task sequence I perform a reboot, but the BIOS never does the update; it just boots into Windows. I have tried this from the command line, from PowerShell, and as an application with the reboot box checked. Every way I run it, the log says it is ready to update on reboot, but the update never happens. I can get the update to work if I manually perform the reboot with the mouse before MDT reboots the machine. That actually performs the update at reboot as it should! However, this of course creates a dirty environment and MDT gets grumpy.
This happens with all the different Dell BIOS builds I try, even ones only one or two versions newer. We currently use PDQ to run the updates, and when I call the install from there it also works fine, but we want to move away from PDQ to a free solution such as running it straight from MDT. I have found many different ways people have done this via a task sequence, with no mention of this hiccup. What I seem to be running into is that MDT removes whatever the BIOS update stages in the boot sequence, so the flash never gets performed. I've tried different credentials, Dell's Flash64W.exe, and too much else to list. Everything seems to work until the reboot. I'm stumped.
Sample of simple working PowerShell:
# Get model of system to be updated
$Model = (Get-WmiObject Win32_ComputerSystem).Model
Write-Host "Model Found: $($Model)"
# Get root folder where the BIOS for this model is stored
$BIOSRoot = "Z:\Applications\BIOSUpdates\Dell\$Model"
Write-Host "BIOSRoot: $($BIOSRoot)"
# Get path to the BIOS executable and build the argument list
# (use a dedicated variable rather than $ARGS, which is an automatic variable)
$BIOSFile = Get-ChildItem -Path "$BIOSRoot" -Include *.exe -Recurse -ErrorAction SilentlyContinue
#Write-Host "BIOSFile: $($BIOSFile)"
$BIOSArgs = @('/s', '/f')
Write-Host "BIOSFile and arguments: $($BIOSFile) $($BIOSArgs)"
# Start BIOS update with the completed path
Start-Process -FilePath "$BIOSFile" -ArgumentList $BIOSArgs -Wait
Is anyone else having this show-stopping issue?
So after much testing, the answer is to run the Start-Process call at the end of my code twice. Why? I have no idea. I accidentally had it looping and it worked on the second pass; it just needed to run twice. I thought maybe it simply needed more time, so I put a sleep at the end of the code, but time is not what it wanted. Very bizarre.
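For reference, a minimal sketch of that workaround, reusing the variables from the script above; the doubled call is simply what turned out to work in my testing:

# Run the flash utility twice; in testing, only the second pass actually
# staged the BIOS update for the next reboot.
Start-Process -FilePath "$BIOSFile" -ArgumentList $BIOSArgs -Wait
Start-Process -FilePath "$BIOSFile" -ArgumentList $BIOSArgs -Wait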
Related
I have slow PowerShell console startup times (always more than a 5 second wait) and am hoping for advice on troubleshooting steps to find out where the bottlenecks might be.
I have read that for running scripts, -NoProfile is important to prevent modules etc. from loading, but how, in general, should we approach finding out what is slowing things down? I don't have many modules installed, and I know that since PowerShell 3.0 modules are only referenced at startup rather than fully loaded (a module is only fully loaded when a function from that module is invoked), so I just can't understand why it takes 5+ seconds to start a bare console (my $profile is also empty).
Any advice on steps I can take to debug the console startup process would be appreciated. Also, are there any Microsoft or third-party tools that can trace the various stages of console startup to look for bottlenecks?
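For example, the only quick check I know of is roughly timing a cold start with and without the profile (assuming it is powershell.exe, the Windows PowerShell console, that is slow):

# Rough comparison of startup cost with and without profile processing;
# run each a few times and ignore the first (cold-cache) result.
Measure-Command { powershell.exe -NoProfile -Command exit } | Select-Object TotalMilliseconds
Measure-Command { powershell.exe -Command exit } | Select-Object TotalMilliseconds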
When PowerShell starts to become slow at startup, an update of the .NET framework might be the cause.
To speed up again, use ngen.exe on PowerShell's assemblies.
It generates native images for an assembly and its dependencies and installs them in the native image cache.
Run this as Administrator
$env:PATH = [Runtime.InteropServices.RuntimeEnvironment]::GetRuntimeDirectory()
[AppDomain]::CurrentDomain.GetAssemblies() | ForEach-Object {
    $path = $_.Location
    if ($path) {
        $name = Split-Path $path -Leaf
        Write-Host -ForegroundColor Yellow "`r`nRunning ngen.exe on '$name'"
        ngen.exe install $path /nologo
    }
}
Hope that helps
Step 1: Stop using PowerShell.
Now, seriously, something that needs ~13 seconds (YMMV) on a quad-core i7 CPU to launch off an SSD is an abomination of software architecture.
But yes, I hear you, "no viable alternative" etc...
... but if forced, bribed, or blackmailed into still using it, check whether your Windows has the DNS Client (DNS cache) service enabled.
For me, with the DNS cache disabled and the PowerShell executable firewalled, the built-in 5.1.19041.906 version starts quickly, but the new pwsh 7.1.4 would take around 13 seconds to become responsive to keyboard input under the same circumstances. It's so desperate to call home that it just waits synchronously for some network timeout while blocking all user input, as if threads were a thing for the weak.
My resolution was to stick with the olden powershell 5.
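If the delay really is pwsh 7 reaching out over the network, one thing that might also be worth trying (an assumption on my part, I have not measured it myself) is opting out of its telemetry and update check before launching it:

# Machine-wide opt-outs read by pwsh at startup (PowerShell 7+); run elevated.
[Environment]::SetEnvironmentVariable('POWERSHELL_TELEMETRY_OPTOUT', '1', 'Machine')
[Environment]::SetEnvironmentVariable('POWERSHELL_UPDATECHECK', 'Off', 'Machine')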
My work computer stored the main profile on a remote server. Another minor problem was that it imported duplicate modules from 4 different profile.ps1 files.
Use the following commands to see where your profiles and modules are stored. Delete the unnecessary profile.ps1 and move all your modules into one directory.
echo $env:PSModulePath
$profile | select *
My loading time was reduced from 21000ms to 1300ms.
Found this solution when I googled having the same problem, but in 2022. Unfortunately this did not fix my issue.
Our systems have a group policy security requirement to "Turn on PowerShell Transcription". The policy requires that we specify "the Transcript output directory to point to a Central Log Server or another secure location". The server name changed and no one updated the policy. As soon as I updated the GPO with the new location, PowerShell opened instantly again.
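If you want to check whether the same thing is biting you, the transcription policy lands in the registry; something along these lines should show the configured output directory (path assumed from the standard policy location):

# Show the PowerShell Transcription policy, if one is applied via GPO.
# The OutputDirectory value is the path every new session tries to reach at startup.
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\Transcription' -ErrorAction SilentlyContinue |
    Select-Object EnableTranscripting, OutputDirectory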
Press Windows+R
Type %temp% and hit Enter
Press Ctrl+A, then Shift+Del to delete everything in the folder
That should do it
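The same cleanup can be scripted; a one-liner along these lines clears the current user's temp folder (files in use are simply skipped):

# Clear the current user's temp folder; locked files are skipped silently.
Remove-Item -Path "$env:TEMP\*" -Recurse -Force -ErrorAction SilentlyContinue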
OK, so I currently have a problem where I am trying to deploy and run exe files remotely. I have a Ninite installer for just a simple 7-Zip on it, which I want to install, and so far I have gotten close enough that it even executes the file, but it never finishes the installation. I can see it in Task Manager, but it never makes any progress, nor does it use any CPU, only a small bit of memory.
The code looks like this:
Copy-Item -Path C:\windows\temp\installer.exe -Destination 'c:\users\administrator\desktop\installer' -ToSession $session
Start-Sleep -Seconds 2
Invoke-Command -ComputerName MyPc -ScriptBlock { $(& cmd /c 'c:\users\administrator\desktop\installer\installer.exe' -silent -wait -passthru | select ExitCode) }
$session | Remove-PSSession
This runs, but it never stops running; the script just keeps going without ever finishing. If I look in Task Manager, I can clearly see the installer processes sitting there, but they don't do anything and never finish. I have now let it sit for about 15 minutes (the installation takes 2-3 minutes at most) and still, no progress.
Any ideas as to what could cause this, or at least how to fix it and let it finish? This is the last thing I need to get done in my project, and I am so tired of it already.
NOTE: I have found out that if I add ANY kind of parameters (-Wait, -PassThru, etc.) then it becomes stuck forever with 0 CPU usage; even if I just attach -ArgumentList with zero arguments it does the same. But if I have nothing but the command in the script block, it goes through, says "done master", and then turns out to be the biggest liar, because the software has not been installed; it just closed the installer instead.
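For comparison, a minimal sketch of the same copy-and-install flow using Start-Process inside the remote script block (assuming the installer can run unattended, that $session is created up front, and that the destination folder already exists on the remote machine):

# Sketch: copy the installer into the session, run it remotely, and capture the exit code.
$session = New-PSSession -ComputerName MyPc
Copy-Item -Path 'C:\windows\temp\installer.exe' -Destination 'C:\users\administrator\desktop\installer\installer.exe' -ToSession $session
$exitCode = Invoke-Command -Session $session -ScriptBlock {
    # -Wait and -PassThru belong to Start-Process itself, not to the installer's argument list.
    (Start-Process -FilePath 'C:\users\administrator\desktop\installer\installer.exe' -Wait -PassThru).ExitCode
}
Remove-PSSession $session
$exitCode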
We have a PowerShell script to continually monitor a folder for new JSON files and upload them to Azure. We have this script saved on a shared folder so that multiple people can run this script simultaneously for redundancy. Each person's computer has a scheduled task to run it at login so that the script is always running.
I wanted to update the script, but then I would have had to ask each person to stop their running script and restart it. This is especially troublesome since we eventually want to run this script in "hidden" mode so that no one accidentally closes out the window.
So I wondered if I could create a script that updates itself automatically. I came up with the code below, and when this script is running and a new version of it is saved, I expected the running PowerShell window to close when it hit the Exit command and then a new window to open and run the new version of the script. However, that didn't happen.
It continues along without a blip. It doesn't close the current window, and it even keeps the output from old versions of the script on the screen. It's as if PowerShell doesn't really Exit; it just figures out what's happening and keeps going with the new version of the script. I'm wondering why this is happening. I like it, I just don't understand it.
#Place at top of script
$lastWriteTimeOfThisScriptWhenItFirstStarted = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime

#Continuous loop to keep this script running
While($true) {
    Start-Sleep 3 #seconds

    #Run this script, change the text below, and save this script
    #and the PowerShell window stays open and starts running the new version without a hitch
    "Hi"

    $lastWriteTimeOfThisScriptNow = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
    if($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
        . $PSCommandPath
        Exit
    }
}
Interesting Side Note
I decided to see what would happen if my computer lost connection to the shared folder where the script was running from. It continues to run, but presents an error message every 3 seconds as expected. But, it will often revert back to an older version of the script when the network connection is restored.
So if I change "Hi" to "Hello" in the script and save it, "Hello" starts appearing as expected. If I unplug my network cable for a while, I soon get error messages as expected. But then when I plug the cable back in, the script will often start outputting "Hi" again even though the newly saved version has "Hello" in it. I guess this is a negative side-effect of the fact that the script never truly exits when it hits the Exit command.
. $PSCommandPath is a blocking (synchronous) call, which means that Exit on the next line isn't executed until the dot-sourced $PSCommandPath has itself exited.
Given that $PSCommandPath here is your script, which never exits (even though it seemingly does), the Exit statement is never reached (assuming the new version of the script keeps the same fundamental while-loop logic).
While this approach works in principle, there are caveats:
You're using ., the "dot-sourcing" operator, which means the script's new content is loaded into the current scope (and generally you always remain in the same process, as you always do when you invoke a *.ps1 file, whether with . or (the implied) regular call operator, &).
While variables / functions / aliases from the new script then replace the old ones in the current scope, old definitions that you've since removed from the new version of the script would linger and potentially cause unwanted side-effects.
As you observe yourself, your self-updating mechanism will break if the new script contains a syntax error that causes it to exit, because the Exit statement then is reached, and nothing is left running.
That said, you could use that as a mechanism to detect failure to invoke the new version:
Use try { . $PSCommandPath } catch { Write-Error $_ } instead of just . $PSCommandPath
and instead of the Exit command, issue a warning (or do whatever is appropriate to alert someone of the failure) and then keep looping (continue), which means the old script stays in effect until a valid new one is found.
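A rough sketch of that variant, reusing the question's variable names (illustrative only, not a drop-in replacement):

$lastWriteTimeOfThisScriptWhenItFirstStarted = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
While($true) {
    Start-Sleep 3

    # ... the actual work of the script goes here ...

    $lastWriteTimeOfThisScriptNow = [datetime](Get-ItemProperty -Path $PSCommandPath -Name LastWriteTime).LastWriteTime
    if($lastWriteTimeOfThisScriptWhenItFirstStarted -ne $lastWriteTimeOfThisScriptNow) {
        try {
            . $PSCommandPath   # blocks for as long as the new version keeps running
            Exit               # only reached if the new version ever returns
        } catch {
            Write-Warning "Could not load the new script version: $_"
            continue           # keep running the old version until a valid new one appears
        }
    }
}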
Even with the above, the fundamental constraint of this approach is that you may exceed the maximum call-recursion depth: the nested . invocations pile up, and once the nesting limit is reached you won't be able to perform another, leaving you stuck in a loop of futile retries.
That said, as of Windows PowerShell v5.1 this limit appears to be around 4900 nested calls, so if you never expect the script to be updated that frequently while a given user session is active (a reboot / logoff would start over), this may not be a concern.
Alternative approach:
A more robust approach would be to create a separate watchdog script whose sole purpose is to monitor for new versions, kill the old running script and start the new one, with an alert mechanism for when starting the new script fails.
Another option is to have the main script run "stages", where it runs a command based on the name of the highest-revision script in a folder. I think mklement0's watchdog is a genius idea, though.
What I'm referring to is doing what you do now, but using a variable as your command and having that variable updated with the highest-numbered script name. That way you just drop 10.ps1 into the folder and it will ignore 9.ps1. The function in that script would be named mainfunction10, etc.
Something like
$command = ((Get-ChildItem C:\path\to\scriptfolder\).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"
The files would have to be named alphabetically from oldest to newest. Otherwise you'll have to sort-object by date.
$command = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).BaseName)[-1]
& "C:\path\to\scriptfolder\$command.ps1"
Or dot-source it instead of invoking it as a command, and then have the later code call the function by name, e.g. & "mainfunction$command", where the function is named after the script.
I still like the watchdog idea more.
The watchdog would look sort of like
While ($true) {
    $new = ((Get-ChildItem C:\path\to\scriptfolder\ | Sort-Object -Property LastWriteTime).FullName)[-1]
    If ($old -ne $new) {
        Kill $old
        Sleep 10
        & $new
    }
    $old = $new
    Sleep 600
}
Mind you, I'm not certain how the scripts are run, and you may need to find the PowerShell instance based on the command line used to start it.
$kill = (Get-CimInstance Win32_Process | Where-Object { $_.CommandLine -like "*$command*" }).ProcessId
Kill $kill
Would replace kill $old
This command is an educated guess and untested.
Another trick would be running the main script from the watchdog as a job: get the job ID, then check for file changes. If a new file comes in, the watchdog can kill that job and repeat the whole process.
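A sketch of that job-based variant (folder path and timings are placeholders):

# Watchdog sketch: run the newest script as a background job and restart it when a newer file appears.
$scriptFolder = 'C:\path\to\scriptfolder'
$current = (Get-ChildItem $scriptFolder -Filter *.ps1 | Sort-Object LastWriteTime)[-1].FullName
$job = Start-Job -FilePath $current

While ($true) {
    Start-Sleep 600
    $newest = (Get-ChildItem $scriptFolder -Filter *.ps1 | Sort-Object LastWriteTime)[-1].FullName
    if ($newest -ne $current) {
        Stop-Job $job; Remove-Job $job      # kill the old copy
        $current = $newest
        $job = Start-Job -FilePath $current # start the new one
    }
}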
You could also just have the script end and have a Windows scheduled task rerun it every 10 minutes, so whatever the latest script is simply runs every ten minutes. This costs more per startup, though.
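A sketch of registering such a task with the ScheduledTasks cmdlets (task name and paths are placeholders):

# Sketch: rerun the script every 10 minutes via a scheduled task.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File C:\path\to\scriptfolder\main.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 10)
Register-ScheduledTask -TaskName 'Rerun uploader script' -Action $action -Trigger $trigger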
Instead of Exit you could use break to kill the loop, and the script will exit naturally.
You can use Test-Connection to check for the server, but if that runs every 3 seconds, that's a lot of pings from a lot of computers.
I am trying to automate updating PowerShell on Windows 7 using Windows6.1-KB2506143-x64.msu, and I am having a heck of a time. The following code works fine in a standalone .ps1 file, and it works in my main .ps1 file, but when run from a module it fails with exit code -2145124341. This is in PS v2, where negative exit codes are handled incorrectly, so that number is perhaps useless; FWIW, I have a good 40 other installers of various types that work from this module. However, this is my first attempt at automating .msu files, so maybe there is a known interaction here that I haven't discovered yet? There are thousands of lines of code between the root .ps1 file where this works and the module where it doesn't, so tracking down what is triggering the error is going to be a beast without some sort of trail to follow. So, does anyone have an idea where I should start?
$filePath = 'wusa.exe'
$argumentList = '"\\PX_SERVER\Rollouts\Microsoft\Windows6.1-KB2506143-x64.msu" /quiet /norestart'
$exitCode = (Start-Process -filePath:$filePath -argumentList:$argumentList -wait -errorAction:stop -passThru).exitCode
Also, running wusa.exe leaves some detritus in the script folder, but only when it is run from the module. Is this an issue with the msu file, or just a bug in wusa? Or does it point at what is causing the issue perhaps?
I had hoped to get this update to work to enable some new features, but between not being able to automate it and the garbage being left behind, I am very close to abandoning that path and just continuing to target v2. But hopefully someone can point me in the right direction, as that is not my preferred solution at all.
A few thoughts on first reading:
The -ArgumentList parameter for Start-Process needs an ARRAY to work well:
$argumentList = @( "\\PX_SERVER\Rollouts\Microsoft\Windows6.1-KB2506143-x64.msu", "/quiet", "/norestart" )
wusa.exe takes a log parameter, e.g. /log:c:\fso\install.log; can you add it to your script for this particular package to check what happens? (A combined sketch is below.)
A PowerShell script trying to update PowerShell... I'm not quite sure this is meant to work; it's the only case in which I'll fall back on another scripting language (people, please correct me if I'm wrong...).
Please let me know the result of the wusa.exe /log run, thanks.
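Putting both suggestions together, a sketch of the call with an array argument list and logging enabled might look like this (the log path is only an example):

# Sketch: Start-Process with an array -ArgumentList and a wusa.exe log for troubleshooting.
$filePath = 'wusa.exe'
$argumentList = @(
    '\\PX_SERVER\Rollouts\Microsoft\Windows6.1-KB2506143-x64.msu',
    '/quiet',
    '/norestart',
    '/log:C:\Windows\Temp\KB2506143.log'   # example path; any writable location will do
)
$exitCode = (Start-Process -FilePath $filePath -ArgumentList $argumentList -Wait -PassThru).ExitCode
$exitCode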