I've got a PowerShell module that I need to load into a runspace for use by different threads. I understand that defining a session state lets me load modules that the runspace can then access:
$SessionState = [System.Management.Automation.Runspaces.InitialSessionState]::Create()
$SessionState.ImportPSModulesFromPath("$filepath\Validation Library.psd1")
$runspacePool = [runspacefactory]::CreateRunspacePool($SessionState)
The problem is that I can't seem to get ImportPSModulesFromPath to do anything. It doesn't return any errors, yet the $SessionState.Modules collection is always empty, and my runspace keeps returning errors saying it can't find the functions in the module, even though they're defined properly in the psd1 and work fine if I load the module normally with Import-Module.
The psd1 file contains the module definition pointing to a psm1 file in the same folder (I get the same behaviour when pointing directly at the psm1).
After lots of searching and testing, I was unable to find anything wrong with the script, until I closed and reopened my PowerShell session.
Although I had been taking care to dispose of objects and remove the variables I thought were in use, I had been running the same scripts with the same variables repeatedly, so something incorrect had been cached in one of those variables, causing the module load to fail.
Running the same script in a new session showed that the module was loading successfully.
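For reference, a minimal end-to-end sketch of loading a module manifest into a runspace pool in a fresh session (the paths are illustrative; ImportPSModule accepts module names or full paths to .psd1 files):

```powershell
# Sketch: load a module manifest into a runspace pool (paths are illustrative)
$SessionState = [System.Management.Automation.Runspaces.InitialSessionState]::CreateDefault()
$SessionState.ImportPSModule("$filepath\Validation Library.psd1")

$runspacePool = [runspacefactory]::CreateRunspacePool($SessionState)
$runspacePool.Open()

$ps = [powershell]::Create()
$ps.RunspacePool = $runspacePool
[void]$ps.AddCommand('Get-Module')   # the module's functions are resolvable in this runspace
$ps.Invoke()

$ps.Dispose()
$runspacePool.Dispose()   # dispose everything so stale state can't leak into later runs
```

Disposing the pool and the PowerShell instance at the end is exactly the hygiene that avoids the stale-session problem described above.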
Related
I created a module that imports nested modules, and it doesn't work correctly if I set the default command prefix.
One of the recommendations I found on the web was to make the module easier to maintain by creating new folders and separate files for the modules.
So I have:
Module.psm1
Common\Common.psm1
    Get-Data
Company\Company.psm1
    Get-Company
I am also trying to use the Default Command Prefix option. When I add that and then try to use the module, it breaks.
Without the command prefix, I'm able to call Get-Company, which is able to call Get-Data, and everything is happy.
When I ADD the command prefix, I can no longer call Get-Data. If I move the function Get-Company into the root module file, then it is able to call Get-Data.
Am I doing something wrong here? Is this an odd bug, or expected behavior? Details on the Default Command Prefix are very sparse here: How to Write a PowerShell Module Manifest
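For context, a sketch of the kind of manifest involved (the prefix 'Xyz', version, and file names are illustrative, not from the original post). DefaultCommandPrefix is applied to the command names the *importer* sees, not the names defined inside the module files, which is where calls between nested modules can get confused:

```powershell
# Module.psd1 (illustrative sketch)
@{
    RootModule           = 'Module.psm1'
    ModuleVersion        = '1.0'
    NestedModules        = @('Common\Common.psm1', 'Company\Company.psm1')
    FunctionsToExport    = @('Get-Data', 'Get-Company')
    DefaultCommandPrefix = 'Xyz'   # callers see Get-XyzData / Get-XyzCompany
}
```

Inside the module, cross-file calls should use the original, unprefixed names, since the prefixed names only exist in the importing session.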
When I call the following code:
Start-Process Firefox
Then PowerShell opens the browser. I can do the same with several other programs and it works. My question is: how does PowerShell know which program to open when I type Firefox? I mean, I'm not using a concrete path or anything like that...
I thought it had something to do with the environment variables... but I cannot find any variable there called Firefox... How can it know?
I traced two halves of it, but I can't make them meet in the middle.
Process Monitor shows it checking the PATH folders, and eventually checking HKLM\Software\Microsoft\Windows\CurrentVersion\App Paths\firefox.exe, so that's my answer to how it finds the install location and then runs it.
That registry key is for Application Registration which says:
When the ShellExecuteEx function is called with the name of an executable file in its lpFile parameter, there are several places where the function looks for the file. We recommend registering your application in the App Paths registry subkey.
The file is sought in the following locations:
The current working directory.
The Windows directory only (no subdirectories are searched).
The Windows\System32 directory.
Directories listed in the PATH environment variable.
Recommended: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths
That implies PowerShell calls the Windows ShellExecuteEx function, which finds Firefox as a registered application, or else it does the same kind of searching itself internally.
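You can inspect that registration yourself. A sketch (the key name is taken from the Process Monitor trace above, and it only exists on machines where Firefox has registered itself):

```powershell
# Read the App Paths registration that ShellExecuteEx consults when resolving
# a bare executable name like "firefox"
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\firefox.exe' |
    Select-Object '(default)', Path
```

The default value of that key is the full path to the executable, which is why no PATH entry or environment variable is needed.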
Going the other way to try to confirm that, the Start-Process cmdlet has a parameter set called UseShellExecute. The 'Notes' section of that help says:
This cmdlet is implemented by using the Start method of the System.Diagnostics.Process class. For more information about this method, see Process.Start Method
Trying to trace through the source code on GitHub:
Here is the PowerShell source code for Start-Process.
Here, at this line it tries to look up the $FilePath parameter with CommandDiscovery.LookupCommandInfo.
Here it checks else if (ParameterSetName.Equals("UseShellExecute"))
Then here is the .Start() function, which starts the process either with ShellExecute or with Process.Start().
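That split can be reproduced directly from PowerShell. A sketch (launching by the bare name 'firefox' only succeeds when Windows can resolve it, e.g. via App Paths):

```powershell
# Sketch: the two launch paths the cmdlet chooses between.
# With UseShellExecute = $true, Windows (ShellExecute) resolves the bare name,
# consulting App Paths; with $false, the name must resolve to an actual file.
$psi = [System.Diagnostics.ProcessStartInfo]::new('firefox')
$psi.UseShellExecute = $true
[System.Diagnostics.Process]::Start($psi)
```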
OK, I'm not sure whether ShellExecute and ShellExecuteEx behave the same, but it could be PowerShell calling Windows, which does the search for "Firefox".
That CommandSearcher.LookupCommandInfo comes in here and leads to TryNormalSearch(), which is implemented here and immediately starts a CommandSearcher with a state machine for the things it will search:
- SearchState.SearchingAliases
- Functions
- CmdLets
- SearchingBuiltinScripts
- StartSearchingForExternalCommands
- PowerShellPathResolution
- QualifiedFileSystemPath
and there I get lost. I can't follow it any further right now.
Either it shortcuts straight to Windows doing the lookup
Or the PowerShell CommandSearcher does the same search somehow
Or the PowerShell CommandSearcher in some way runs out of places to search, and the whole thing falls back to asking Windows to search.
The fact that Process Monitor only logs one query in each PATH folder for "firefox.*" and then goes to the registry key suggests it's not doing this one, or I'd expect many more lookups.
The fact that it logs one query for "get-firefox.*" in each PATH folder suggests it is PowerShell's command searcher doing the lookup and not Windows.
Hmm.
Using Process Monitor, I was able to trace PowerShell.
It first searches the $env:Path directories, then the $profile location.
In my case Firefox wasn't found there, and PowerShell then searches through a whole lot of the registry and eventually finds it.
It might have something to do with how Firefox is installed on the system.
I have created a custom module as a PowerShell class following, roughly, the instructions available at Writing a custom DSC resource with PowerShell classes. The intent is to connect to Azure File Storage and download some files. I am using Azure Automation DSC as my pull server.
Let me start by saying that, when run through the PowerShell ISE, the code works a treat. Something goes wrong when I upload it to Azure, though: I get the error Unable to find type [CloudFileDirectory]. This type specifier comes from assemblies referenced through the module Azure.Storage, which is definitely in my list of automation assets.
At the tippy top of my psm1 file I have
Using namespace Microsoft.WindowsAzure.Storage.File
[DscResource()]
class tAzureStorageFileSync
{
...
# Create the search context
[CloudFileDirectory] GetBlobRoot()
{
...
}
...
}
I'm not sure whether this Using statement is supported in this scenario or not, so let's call that Question 1.
To date I have tried:
Adding RequiredModules = @( "Azure.Storage" ) to the psd1 file
Adding RequiredAssemblies = @( "Microsoft.WindowsAzure.Storage.dll" ) to the psd1 file
Shipping the actual Microsoft.WindowsAzure.Storage.dll file in the root of the module zip that I upload (that has a terrible smell about it)
When I deploy the module to Azure with New-AzureRmAutomationModule it uploads and processes just fine. The Extracting activities... step works and gives no errors.
When I compile a configuration, however, the compilation process fails with the Unable to find type error I mentioned.
I have contemplated adding an Import-Module Azure.Storage above the class declaration, but I've never seen that done anywhere else before.
Question 2 Is there a way I can compile locally using a similar process to the one used by Azure DSC so I can test changes more quickly?
Question 3 Does anyone know what is going wrong here?
Question 1/3:
If you create classes in PowerShell that use other classes, ensure those classes are present BEFORE loading the script file that contains your new class.
For example:
Loader.ps1:
Import-Module Azure.Storage
. .\MyDSC-Class.ps1
PowerShell checks whether it can find all the types you reference while interpreting the script, so all of those types must be loaded before that happens. You can do this by creating a loader script that imports all dependencies first and then dot-sources your script.
For question 2: if you register your machine as a hybrid worker, you'll be able to run the script faster and compile locally. (For more details on hybrid workers, see https://azure.microsoft.com/en-us/documentation/articles/automation-hybrid-runbook-worker/.)
If you want an easy way to register the hybrid worker, you can run this script on your local machine (https://github.com/azureautomation/runbooks/blob/master/Utility/ARM/New-OnPremiseHybridWorker.ps1). Just make sure you have WMF 5 installed on your machine beforehand.
For authoring DSC configurations and testing locally, I would look at the Azure Automation ISE add-on available at https://www.powershellgallery.com/packages/AzureAutomationAuthoringToolkit/0.2.3.6. You can install it by running the command below from an Administrator PowerShell ISE window.
Install-Module AzureAutomationAuthoringToolkit -Scope CurrentUser
For loading libraries, I have also noticed that I need to call Import-Module in order to be able to call methods. I need to do some research to determine the reason for this. You can see an example I wrote to copy files from Azure Storage using a storage key at https://github.com/azureautomation/modules/tree/master/cAzureStorage
As you probably don't want to have to deploy the storage library on all nodes, I included the storage library in the sample module above so that it will be automatically distributed to all nodes by the automation service.
Hope this helps,
Eamon
I am using Carbon's PowerShell module for some work. When I move the folder to a different machine, the scripts within are flagged and blocked from executing until I unblock them (which is fine). When I execute the following:
gci .\Carbon -Recurse | Unblock-File
I am still unable to import the module until I create a new Powershell session. The files are definitely unblocked at this point, but I continue to receive the same error until that new session has been created.
I've read over some TechNet articles, and they state that you just need to close and reopen PowerShell to resolve it, but give no reasoning as to why this needs to occur.
This actually goes back to the .NET Framework, on which PowerShell is built. You're essentially loading a new assembly into the process, and a blocked file is considered a "remote" file, which .NET is not set to load by default.
How the Runtime Locates Assemblies
Checks whether the assembly name has been bound to before and, if so, uses the previously loaded assembly.
The thing is, this step caches "negative" load results as well (at least in my experience from trying to load other assemblies). .NET has no way to unload assemblies once they're loaded, so you have no choice but to restart the process.
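One way around the restart, sketched below, is to do the import in a fresh child process so the cached "blocked" load result in the current process can't interfere (the Carbon path is taken from the question; a new process gets a clean assembly load context):

```powershell
# Unblock the module files, then import in a brand-new process rather than
# the current one, whose loader has already cached the failed/blocked load
Get-ChildItem .\Carbon -Recurse | Unblock-File
powershell -NoProfile -Command "Import-Module .\Carbon; Get-Module Carbon"
```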
I have a PowerShell module I am writing, and one thing I am curious about, as it is a bit unclear, is the ScriptsToProcess key. Can or should I use it to verify things like OS type, bitness, or the presence or absence of certain environment variables, and throw warnings if some of my functions rely on those requirements?
You can use ScriptsToProcess to execute a script in the caller's session state (create variables, etc.) or to use Write-Warning to notify the user that prerequisites haven't been met. However, even if you throw an error there, the module is still loaded.
So if your intent is to prevent the module from loading, put the warnings/throws in the startup script of your ModuleToProcess/RootModule instead. The module will still show up in the Get-Module list, but there shouldn't be any exported commands. Also, if you want to check bitness, use the module manifest's ProcessorArchitecture key.
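A sketch of the two pieces described above (the file names and the MY_REQUIRED_VAR environment variable are hypothetical examples, not a real requirement):

```powershell
# MyModule.psd1 (sketch)
@{
    RootModule            = 'MyModule.psm1'
    ScriptsToProcess      = @('Test-Prereqs.ps1')  # runs in the caller's session state
    ProcessorArchitecture = 'Amd64'                # manifest-level bitness requirement
}

# Test-Prereqs.ps1 (sketch): can warn, but the module still loads afterwards
if (-not $env:MY_REQUIRED_VAR) {
    Write-Warning 'MY_REQUIRED_VAR is not set; some functions may not work.'
}
```

To actually abort loading, the throw would go at the top of MyModule.psm1 instead, as the answer suggests.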