I use PowerShell 5.1, and we often install modules from our internal module repository hosted on our on-prem Azure Artifacts (we use Azure DevOps Server 2019).
The problem is that it is very slow: it regularly takes over 10 seconds to install a single module. And it is not the network, which is pretty fast; the time is spent inside the Install-Module internals.
I tried running Set-PSDebug -Trace 2 from within an Azure DevOps build in order to get line timestamps, but it is useless. For example, observe this output snippet:
2020-06-29T04:20:40.6944925Z DEBUG: 267+ switch ( >>>> $MsgID)
2020-06-29T04:20:40.6957451Z DEBUG: ! SET $switch = ''.
2020-06-29T04:20:40.6972578Z DEBUG: 290+ >>>> }
2020-06-29T04:20:40.6986528Z DEBUG: ! SET $switch = ''.
2020-06-29T04:20:40.6998323Z DEBUG: 232+ >>>> }
2020-06-29T04:20:48.3791151Z DEBUG: 220+ $script:PackageManagementInstallModuleMessageResolverScriptBlock = >>>> {
2020-06-29T04:20:48.3808676Z DEBUG: ! CALL function '<ScriptBlock>' (defined in file 'C:\Program
2020-06-29T04:20:48.3811147Z Files\WindowsPowerShell\Modules\PowerShellGet\1.0.0.1\PSModule.psm1')
2020-06-29T04:20:48.3822332Z DEBUG: 222+ >>>> $PackageTarget =
2020-06-29T04:20:48.3824673Z $LocalizedData.InstallModulewhatIfMessage
It shows an 8-second pause, but the place where the pause appears does not make any sense.
So, my question is this - why is it so slow? Is there a way to profile it reliably?
EDIT 1
Have just installed PS Core 7 - the same lousy performance for Install-Module. My version of PowerShellGet is:
C:\> Get-Module PowershellGet | Select Version
Version
-------
2.2.4
C:\>
EDIT 2
Found this page - https://learn.microsoft.com/en-us/powershell/scripting/gallery/how-to/working-with-packages/manual-download?view=powershell-7 It explicitly warns against simulating Install-Module with nuget, even though it explains how to do it. I would like to understand more about the implications of using nuget instead of Install-Module, besides it being about 5 times faster on average.
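For reference, the manual flow that page describes can be sketched roughly like this. The module name, version, and feed URL below are placeholders, and this sketch deliberately skips the validation steps Install-Module performs, which is exactly what the page warns about:

```powershell
# Sketch: simulating Install-Module with nuget.exe against an internal feed.
# Assumes nuget.exe is on PATH; all names and URLs are illustrative.
$moduleName    = 'MyInternalModule'
$moduleVersion = '1.2.3'
$feedUrl       = 'https://tfs.example.com/tfs/Collection/_packaging/MyFeed/nuget/v2'
$staging       = Join-Path $env:TEMP 'module-staging'

nuget install $moduleName -Version $moduleVersion -Source $feedUrl `
    -OutputDirectory $staging -ExcludeVersion

# Strip the nuget-specific artifacts the article tells you to remove,
# then copy the payload into a versioned folder on the module path.
$source = Join-Path $staging $moduleName
Remove-Item -Recurse -Force -ErrorAction SilentlyContinue `
    (Join-Path $source '*.nupkg'), `
    (Join-Path $source '[Content_Types].xml'), `
    (Join-Path $source '_rels'), `
    (Join-Path $source 'package')

$dest = Join-Path "$HOME\Documents\WindowsPowerShell\Modules" `
                  "$moduleName\$moduleVersion"
Copy-Item $source $dest -Recurse -Force
```

The speedup comes from skipping dependency resolution, signature and catalog checks, and the PowerShellGet metadata generation; those are also the implications the page is warning about.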
EDIT 3
The modules are not signed; we are talking about our internal modules. But installing modules from PSGallery, like Az.Accounts, Az.Storage and Az.Sql, takes about the same time. When our build needs to make sure 5 modules are installed, it easily takes a minute. On another note, Install-Module is not concurrency safe, so when our builds ran it bare we faced all kinds of weird errors. They went away when we wrapped Install-Module in an explicit named mutex. Needless to say, that did not help performance.
While this doesn't answer your "why", you might like to look at JustinGrote's high performance Powershell Gallery Module Installer:
.SYNOPSIS
High Performance Powershell Module Installation
.DESCRIPTION
This is a proof of concept for using the Powershell Gallery OData
API and HTTPClient to parallel install packages
It is also a demonstration of using async tasks in powershell
appropriately. Who says powershell can't be fast?
This drastically reduces the bandwidth/load against Powershell
Gallery by only requesting the required data
It also handles dependencies (via Nuget), checks for existing
packages, and caches already downloaded packages
.NOTES
THIS IS NOT FOR PRODUCTION, it should be considered "Fragile" and
has very little error handling and type safety
It also doesn't generate the PowershellGet XML files currently, so
PowershellGet will see them as "External" modules
It is indeed much faster.
Related
I have a Pull Request validation build running for different branches, i.e. several concurrent instances of the build are a very common thing.
One of the things the build does is install a module. Now I could modify the profile on the build agent and install the module from there, but I want to avoid any extra build agent configuration. So, my build installs the module in the current user scope.
I noticed that Install-Module does not seem to be safe when invoked concurrently - it may fail with all kinds of different and weird error messages.
Now I solved this with a named mutex acquired before and released after, but this causes abysmal performance - the code sometimes waits for 30 seconds and more.
So, how to solve this problem? How to install a powershell module concurrently, but safely and with good performance?
EDIT 1
It is frustrating. I am trying to trace the concurrent installs using Set-PSDebug -Trace 2, but apparently Install-Module makes a lot of Write-Debug invocations that call functions which are themselves not safe for concurrent execution! So trying to trace actually makes matters worse.
Apparently, Install-Module is totally NOT SAFE to run during a build where multiple builds run on the same agent. Looks like using a named Mutex is the safest approach.
EDIT 1
In a multi-threaded environment, invoking the following commands is not safe without an explicit mutex:
Install-Module
Import-Module
Get-PSRepository without arguments
Maybe more. My code invokes all three commands, and I discovered that all of them together must be inside the same mutex, i.e. these combinations do not work:
Not working #1
$mtx.WaitOne()
try
{
Install-Module ...
}
finally
{
$mtx.ReleaseMutex()
}
Import-Module ...
Get-PSRepository ...
Not working #2
$mtx.WaitOne()
try
{
Install-Module ...
Import-Module ...
}
finally
{
$mtx.ReleaseMutex()
}
Get-PSRepository
The only safe option appears to be this one:
$mtx.WaitOne()
try
{
Install-Module ...
Import-Module ...
Get-PSRepository
}
finally
{
$mtx.ReleaseMutex()
}
Which is surprising, because I do not expect Install-Module or Import-Module to affect Get-PSRepository, yet somehow they do:
ParameterBindingException: A parameter cannot be found that matches parameter name 'Provider'.
at Get-PSRepository<Process>, C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.0.0.1\PSModule.psm1: line 4496
at Use-ModuleFB22C60E, C:\Users\mkharitonov\AppData\Local\Temp\fb22c60e-a0c5-48b3-953a-0b580c6a2f5e\m_deadbeef_.ps1: line 167
at <ScriptBlock>, <No file>: line 4
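For completeness, here is the working pattern from above with the mutex creation spelled out. The mutex name is illustrative; a Global\ prefix makes it visible across sessions on the same agent:

```powershell
# Serialize module operations across concurrent builds on one agent
# with a named system-wide mutex (name is a placeholder).
$mtx = New-Object System.Threading.Mutex($false, 'Global\InstallModuleLock')
[void]$mtx.WaitOne()
try
{
    Install-Module MyModule -Scope CurrentUser -Repository MyRepo -Force
    Import-Module MyModule
    Get-PSRepository
}
finally
{
    $mtx.ReleaseMutex()
}
```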
I have created a custom module as a PowerShell class following, roughly, the instructions available at Writing a custom DSC resource with PowerShell classes. The intent is to connect to Azure File Storage and download some files. I am using Azure Automation DSC as my pull server.
Let me start by saying that, when run through the PowerShell ISE, the code works a treat. Something goes wrong when I upload it to Azure though: I get the error Unable to find type [CloudFileDirectory]. This type specifier comes from assemblies referenced by the module Azure.Storage, which is definitely in my list of automation assets.
At the tippy top of my psm1 file I have
Using namespace Microsoft.WindowsAzure.Storage.File
[DscResource()]
class tAzureStorageFileSync
{
...
# Create the search context
[CloudFileDirectory] GetBlobRoot()
{
...
}
...
}
I'm not sure whether this Using is supported in this scenario or not, so let's call that Question 1
To date I have tried:
Adding RequiredModules = @( "Azure.Storage" ) to the psd1 file
Adding RequiredAssemblies = @( "Microsoft.WindowsAzure.Storage.dll" ) to the psd1 file
Shipping the actual Microsoft.WindowsAzure.Storage.dll file in the root of the module zip that I upload (that has a terrible smell about it)
When I deploy the module to Azure with New-AzureRmAutomationModule it uploads and processes just fine. The Extracting activities... step works and gives no errors.
When I compile a configuration, however, the compilation process fails with the Unable to find type error I mentioned.
I have contemplated adding an Import-Module Azure.Storage above the class declaration, but I've never seen that done anywhere else before.
Question 2 Is there a way I can compile locally using a similar process to the one used by Azure DSC so I can test changes more quickly?
Question 3 Does anyone know what is going wrong here?
Question 1/3:
If you create classes in PowerShell that use other classes, ensure that those classes are present BEFORE loading the script file that contains your new class.
I.e.:
Loader.ps1:
Import-Module Azure.Storage
. .\MyDSC-Class.ps1
PowerShell resolves all the types you reference while it parses the script, so every type must already be loaded at that point. You can achieve this with a loader script that imports all dependencies first and dot-sources your script afterwards.
For question 2, if you register your machine as a hybrid worker you'll be able to run the script faster and compile locally. (For more details on hybrid workers, https://azure.microsoft.com/en-us/documentation/articles/automation-hybrid-runbook-worker/).
If you want an easy way to register the hybrid worker, you can run this script on your local machine (https://github.com/azureautomation/runbooks/blob/master/Utility/ARM/New-OnPremiseHybridWorker.ps1). Just make sure you have WMF 5 installed on your machine beforehand.
For authoring DSC configurations and testing locally, I would look at the Azure Automation ISE Add-On available on https://www.powershellgallery.com/packages/AzureAutomationAuthoringToolkit/0.2.3.6 You can install it by running the below command from an Administrator PowerShell ISE window.
Install-Module AzureAutomationAuthoringToolkit -Scope CurrentUser
For loading libraries, I have also noticed that I need to call import-module in order to be able to call methods. I need to do some research to determine the requirement for this. You can see an example I wrote to copy files from Azure Storage using a storage key up on https://github.com/azureautomation/modules/tree/master/cAzureStorage
As you probably don't want to have to deploy the storage library on all nodes, I included the storage library in the sample module above so that it will be automatically distributed to all nodes by the automation service.
Hope this helps,
Eamon
I am using Windows 7 as well as Windows 2008 R2, and I am trying to write a PowerShell script to find all the software installed on all the machines on my network. I have done research and see the cmdlets I need for this task, but I get many "unrecognized cmdlet" errors. I am new to PowerShell, and the other admins only use GUIs, so I am trying to show them how powerful the command line can be. Is there something I need to run to update my machine with the latest cmdlets?
$PSVersionTable.PSVersion
Major Minor Build Revision
----- ----- ----- --------
3 0 -1 -1
currently the command that is failing is Get-RemoteProgram
I am using 64-bit machines
Assuming that you are using this Get-RemoteProgram, you need to "dot source" it before you can use the command. This tells your script to read the file and include the functions it contains in your script.
. .\Get-RemoteProgram.ps1
Load the function into memory by dot-sourcing the script file; this makes the Get-RemoteProgram function available in your current PowerShell session.
So your script would need to include
. .\Get-RemoteProgram.ps1
prior to any call to Get-RemoteProgram
As far as the version of PowerShell goes, 3.0 is certainly not the latest. You can always find the latest version at Microsoft. Currently, https://msdn.microsoft.com/powershell is a good place to reference, or even check Wikipedia; lots of places are kept updated with the latest info on PowerShell.
Just installed the Management Framework 5 production preview, and my profile won't load anymore, with the following error:
Get-Module : Cannot load the module 'UserInfo.psm1' because the module nesting limit has been exceeded. Modules can only be nested
to 10 levels. Evaluate and change the order in which you are loading modules to prevent exceeding the nesting limit, and then try running your script again.
Is there a way to get more details?
I tried to trace execution with set-psdebug -trace 2, but I can't see the problem...
Found it: I had the same modules loaded from the .psd1's RequiredModules = @('coresmac','activedirectory') and from the .psm1's #requires -Modules ActiveDirectory,userinfo .
As the #requires instruction looks like a comment, it was quite easy to miss ...
I'm a nub scripter and am trying to write a really simple script to taskkill 2 programs and then uninstall 1 of them.
I wrote it in Powershell and stuck it in SCCM for deployment...however every time I deploy it, it's not running the last line to uninstall the program.
Here's the code:
# Closing Outlook instance
#
taskkill /IM outlook.exe /F
#
# Closing Linkpoint instance
#
taskkill /IM LinkPointAssist.exe /F
#
# Uninstalling Linkpoint via uninstall string if in Program Files
#
MsiExec.exe /X {DECDCD14-DEF6-49ED-9440-CC5E562FDC41} /qn
#
# Uninstalling Linkpoint via WmiObject if installed manually in AppData
Get-WmiObject -class win32_product -Filter "Name like '%Linkpoint%'" | ForEach-Object { $_.Uninstall()}
#
Exit
Can someone help? SCCM says the script completes with no error and I know it's able to execute it since the taskkills work...but it's not uninstalling the program.
Thanks in advance for any input.
So, SCCM is running this script, and nothing in the script is going to throw an error.
If you want to throw an error which SCCM can return to know how the deployment went, you need to add an extra step.
$result = Get-WmiObject -class win32_product -Filter "Name like '%Linkpoint%'" | ForEach-Object { $_.Uninstall()}
if ($result.ReturnValue -ne 0) {
    [System.Environment]::Exit(1603)
} else {
    [System.Environment]::Exit(0)
}
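One more assumption worth checking in the original script (not something the answer above claims): msiexec.exe is a GUI-subsystem executable, so PowerShell does not wait for it to finish, and SCCM may see the script exit before the uninstall has even run. A hedged sketch of invoking it synchronously and surfacing its exit code:

```powershell
# Run msiexec synchronously and pass its exit code back to SCCM.
# The product code is the one from the question.
$p = Start-Process msiexec.exe `
     -ArgumentList '/X', '{DECDCD14-DEF6-49ED-9440-CC5E562FDC41}', '/qn' `
     -Wait -PassThru
exit $p.ExitCode
```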
I see a lot of these kinds of questions come through on SO and SF: someone struggling with unexpected behavior of an application, script, or ConfigMgr, and very little information about the assumptions I can make about their environment. At that stage it would typically take days of interaction to narrow the problem to a point where a solution is possible.
I'm hoping this answer can serve as a reference for future such questions. The first question to OP should be "Which of these 9 principles are you violating?" You could think of it as a sort of Joel Test for ConfigMgr application packaging.
Nine Steps to Better ConfigMgr Application Packages
I have found that installing and uninstalling applications reliably using ConfigMgr requires carefully sticking to a bunch of principles. I learned these principles the hard way. If you're struggling to figure out why an application is not working right under ConfigMgr, odds are that you will answer "no" to one of the following questions.
1. Are you testing the entire lifecycle?
In order to have any hope of reliably managing an application you need to test the entire lifecycle of an application. This is the sequence I test:
Detect: make sure the detection script result is negative
Install: install the application using your installation script
Detect: make sure the detection script result is positive when run
Uninstall: uninstall using your uninstallation script
I run this sequence repeatedly making tweaks to each step until the whole sequence is working.
2. Are you testing independently of ConfigMgr first?
Using ConfigMgr to test your application's lifecycle is slow and has its own ways of failing that can mask problems with your application package. The goal, then, is to be able to test an application's installation, detection, and uninstallation separate from but equivalent to the ConfigMgr client. In order to achieve that goal you end up with three separate scripts for each application:
Install-Application.bat - the entry point for your installation script
Detect-Application.ps1 - the script that detects whether the application is installed
Uninstall-Application.bat - the entry point for your uninstallation script
Each of these three scripts can be invoked directly by either you or the ConfigMgr client. For applications installed as system you need to use psexec -s to invoke scripts in the same context as ConfigMgr (caveat).
3. Are you aware of context?
Installers can behave rather differently depending on the context they are invoked in. You need to consider whether an application is installed for a user or the system. If it is installed for the system, when you test independently of ConfigMgr, use psexec -s to invoke your script.
4. Are you aware of user interaction?
An installer can also behave rather differently depending on whether a user can interact with it. To test a script as system with user interaction, use psexec -i -s.
5. Did you match ConfigMgr to the tested context and user interaction?
Once you have the full lifecycle working, make sure you select the correct corresponding options for context (installed for user vs. system) and interaction (user can interact with application, or not). If you don't do this, the ConfigMgr client will be installing the application differently from the way you tested, so you really can't expect success.
6. Are you aware of the possibility of application detection context mismatch?
The context that detection scripts run in depends on whether the application is deployed to users or systems. This means that in some cases the installation and detection contexts won't match. Keep this in mind when you write your detection scripts.
7. Have you structured your scripts so that exit codes work?
ConfigMgr needs to see exit codes from your installation and uninstallation scripts in order to do the right thing. Installers signal failure or the need to reboot using exit codes. In order for exit codes to get to the ConfigMgr client you need to ensure that your install and uninstall scripts are structured correctly.
for batch scripts, use exit /b %errorlevel% to pass the exit code of your executable out to the ConfigMgr client
for PowerShell scripts, this is the only way I have seen work reliably
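The pattern behind that link isn't reproduced here, but a commonly used shape (an assumption, not necessarily the author's exact approach) is to run the installer with Start-Process -Wait and exit with its code via [System.Environment]::Exit, which reliably sets the process exit code regardless of how the script was invoked:

```powershell
# Sketch: propagate the installer's exit code to the ConfigMgr client.
# setup.exe and its silent switch are placeholders.
$p = Start-Process -FilePath '.\setup.exe' -ArgumentList '/S' -Wait -PassThru
[System.Environment]::Exit($p.ExitCode)
```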
8. Are you using PowerShell scripts for detection?
ConfigMgr has a nice user interface for checking things like the presence of files, registry keys, etc as a proxy for whether an application is installed. The problem with that scheme is that there is no way to test application detection separately from and equivalent to the ConfigMgr client. If you want to test the application lifecycle independent of the ConfigMgr client (trust me, you want that), all your detection must occur using PowerShell scripts.
9. Have you structured your PowerShell detection scripts correctly?
The rules ConfigMgr uses to interpret the output of a PowerShell detection script are arcane. Thankfully, they are documented.
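In short, the documented contract is: the detection script must exit with code 0; anything written to STDOUT means "installed", while exit code 0 with no output means "not installed", and a non-zero exit code or STDERR output signals an error. A minimal sketch (the registry path is a placeholder):

```powershell
# ConfigMgr detection script sketch.
# Output on STDOUT + exit 0  -> application detected.
# No output      + exit 0  -> application not detected.
if (Test-Path 'HKLM:\SOFTWARE\ExampleVendor\ExampleApp')
{
    Write-Output 'Installed'
}
exit 0
```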