Unblock-File - Why do I need to open a new session? - powershell

I am using Carbon's Powershell module for some work. When I move the folder to a different machine, the scripts within are flagged and blocked from being executed until I unblock them (which is fine). When I execute the following:
gci .\Carbon -Recurse | Unblock-File
I am still unable to import the module until I create a new Powershell session. The files are definitely unblocked at this point, but I continue to receive the same error until that new session has been created.
I've read over some TechNet articles and they state that you just need to close and reopen PowerShell to resolve it, but they give no reasoning as to why this needs to occur.

This actually goes back to the .NET Framework on which PowerShell is based. You're essentially loading a new assembly into the process. A blocked file is considered a "remote" file, and by default .NET is not set to load them.
From the documentation topic "How the Runtime Locates Assemblies":
Checks whether the assembly name has been bound to before and, if so, uses the previously loaded assembly.
The thing is, this step caches "negative" results as well (at least in my experience, from trying to load other assemblies). .NET doesn't have a way to unload assemblies once they're loaded, so you have no choice but to restart the process.
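For reference, a minimal sketch of that workflow, assuming the folder layout from the question and that Carbon's manifest is Carbon.psd1 inside the Carbon folder: verify the Zone.Identifier stream really is gone after unblocking, then do the import in a fresh process so the current process's cached load attempt doesn't interfere.
# Confirm the downloaded-file flag (the Zone.Identifier alternate data stream) is gone.
Get-ChildItem .\Carbon -Recurse -File |
    Get-Item -Stream Zone.Identifier -ErrorAction SilentlyContinue   # no output means unblocked

# Import in a new process; the current session keeps whatever it has already cached.
powershell -NoProfile -Command "Import-Module .\Carbon\Carbon.psd1; Get-Module Carbon"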

Related

Possible to use commands provided by a recently installed program without restarting cmd/powershell?

I'm working on a script that automatically installs software. One of the programs to be installed includes its own command line commands and sub-commands when installed.
The goal is to use the program's provided commands to perform an action after its installation.
But when running the command right after the program's installation, I'm greeted by:
" is not recognized as an internal or external command, operable program or batch file"
If I open a new Powershell or cmd window the command is available in that instance.
What is the easiest way to grant the script access to the commands?
Bender the Greatest's helpful answer explains the problem and shows you how to modify the $env:PATH variable in-session by manually appending a new directory path.
While that is a pragmatic solution, it requires that you know the specific directory path of the recently installed program.
If you don't - or you just want a generic solution that doesn't require you to hard-code paths - you can refresh the value of $env:PATH (the PATH environment variable) from the registry, via the [Environment]::GetEnvironmentVariable() .NET API method:
$env:PATH = [Environment]::GetEnvironmentVariable('Path', 'Machine'),
            [Environment]::GetEnvironmentVariable('Path', 'User') -join ';'
This updates $env:PATH in-session to the same value that future sessions will see.
Note how the machine-level value (list of directories) takes precedence over the user-level one, due to coming first in the composite value.
Note:
If you happen to have made in-session-only $env:PATH modifications before calling the above, these modifications are lost; if you want to keep them, you can re-append them yourself, as in the sketch after this note.
If applicable, this includes modifications made by your $PROFILE file.
Hypothetically, other processes could have made additional modifications to the persistent Path variable definitions as well since your session started, which the call above will pick up too (as will future sessions).
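To address that first caveat, a small helper along these lines can refresh from the registry and then re-append any in-session-only entries you still want. This is only a sketch building on the snippet above; the function name, parameter, and example path are made up for illustration:
function Update-SessionPath {
    param([string[]] $ExtraPaths = @())   # in-session-only directories to keep (hypothetical parameter)
    # Rebuild $env:PATH from the persisted machine + user values, then re-append the extras.
    $persisted = [Environment]::GetEnvironmentVariable('Path', 'Machine'),
                 [Environment]::GetEnvironmentVariable('Path', 'User') -join ';'
    $env:PATH = (@($persisted) + $ExtraPaths | Where-Object { $_ }) -join ';'
}

# Example: refresh from the registry while keeping a directory added earlier in this session.
Update-SessionPath -ExtraPaths 'C:\Tools\MyLocalBin'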
This is because the installer updates the persistent PATH environment variable, but existing processes don't see that update unless they specifically query the registry for the live value of the PATH environment variable and then update PATH within their own process. If you need to continue in the same process, the workaround is to add the installation location to the PATH variable yourself after the program has been installed:
Note: in most cases I don't recommend refreshing the live value from the registry instead of the approach below. Other processes can modify that value, not just your own, which can introduce unnecessary risk, whereas appending only what you know should have changed is more pragmatic. It also adds code complexity for a case that often doesn't need to be generalized to that point.
# This will update the PATH variable for the current process
$env:PATH += ";C:\Path\To\New\Program\Folder;"

ImportPSModulesFromPath doesn't add anything to the Modules collection

I've got a PowerShell module which I need to load into a Runspace for use by different threads. I understand that defining a SessionState allows me to load modules in, which can then be accessed by the runspace:
$SessionState = [System.Management.Automation.Runspaces.InitialSessionState]::Create()
$SessionState.ImportPSModulesFromPath("$filepath\Validation Library.psd1")
$runspacePool = [runspacefactory]::CreateRunspacePool($SessionState)
The problem is that I can't seem to get ImportPSModulesFromPath to do anything - it doesn't return any errors, yet the $SessionState.Modules collection is always empty, and my Runspace keeps returning errors saying it can't find the functions in the module, even though they're defined properly in the psd1 and do work if I load normally using Import-Module.
The psd1 file contains the module definition pointing to a psm1 file in the same folder (I get the same behaviour when pointing directly to the psm1)
After lots of searching and testing, I was unable to determine anything wrong with the script - until I closed and reopened my Powershell session.
Though I had been taking care to dispose of and remove the variables I thought were in use, I had been running the same scripts with the same variables repeatedly, and so something stale had been stored in one of my variables, causing the module load to fail.
Running the same script in a new session showed that the module was loading successfully.
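For anyone who wants to rule out stale state more directly, one way is to open the pool and ask a runspace from it what it actually loaded, rather than inspecting variables left over from earlier runs. The sketch below reuses $filepath from the question; it also uses CreateDefault() so the runspace has the standard cmdlets available, which is an assumption on my part (the question used Create()):
$SessionState = [System.Management.Automation.Runspaces.InitialSessionState]::CreateDefault()
$SessionState.ImportPSModulesFromPath("$filepath\Validation Library.psd1")

$runspacePool = [runspacefactory]::CreateRunspacePool($SessionState)
$runspacePool.Open()

# Run Get-Module inside a runspace from the pool to see what it really imported.
$ps = [powershell]::Create()
$ps.RunspacePool = $runspacePool
$null = $ps.AddCommand('Get-Module')
$ps.Invoke() | Select-Object Name, Version

$ps.Dispose()
$runspacePool.Close()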

Unable to find type [CloudFileDirectory] error - dependent assemblies in a custom Azure DSC module

I have created a custom module as a PowerShell class following, roughly, the instructions available at Writing a custom DSC resource with PowerShell classes. The intent is to connect to Azure File Storage and download some files. I am using Azure Automation DSC as my pull server.
Let me start by saying that, when run through the PowerShell ISE, the code works a treat. Something goes wrong when I upload it to Azure though - I get the error Unable to find type [CloudFileDirectory]. This type specifier comes from assemblies referenced in through the module Azure.Storage which is definitely in my list of automation assets.
At the tippy top of my psm1 file I have
Using namespace Microsoft.WindowsAzure.Storage.File

[DscResource()]
class tAzureStorageFileSync
{
    ...
    # Create the search context
    [CloudFileDirectory] GetBlobRoot()
    {
        ...
    }
    ...
}
I'm not sure whether this Using is supported in this scenario or not, so let's call that Question 1.
To date I have tried:
Adding RequiredModules = @( "Azure.Storage" ) to the psd1 file
Adding RequiredAssemblies = @( "Microsoft.WindowsAzure.Storage.dll" ) to the psd1 file
Shipping the actual Microsoft.WindowsAzure.Storage.dll file in the root of the module zip that I upload (that has a terrible smell about it)
When I deploy the module to Azure with New-AzureRmAutomationModule it uploads and processes just fine. The Extracting activities... step works and gives no errors.
When I compile a configuration, however, the compilation process fails with the Unable to find type error I mentioned.
I have contemplated adding an Import-Module Azure.Storage above the class declaration, but I've never seen that done anywhere else before.
Question 2 Is there a way I can compile locally using a similar process to the one used by Azure DSC so I can test changes more quickly?
Question 3 Does anyone know what is going wrong here?
Question 1/3:
If you create classes in PowerShell and use other classes within them, ensure that those classes are present BEFORE loading the script file that contains your new class.
I.e.:
Loader.ps1:
Import-Module Azure.Storage
. .\MyDSC-Class.ps1
PowerShell checks whether it can find all the types you refer to while interpreting the script, so all types must be loaded before that happens. You can do this by creating a script file that loads all dependencies first and loads your script after that.
For question 2, if you register your machine as a hybrid worker you'll be able to run the script faster and compile locally. (For more details on hybrid workers, https://azure.microsoft.com/en-us/documentation/articles/automation-hybrid-runbook-worker/).
If you want an easy way to register the hybrid worker, you can run this script on your local machine (https://github.com/azureautomation/runbooks/blob/master/Utility/ARM/New-OnPremiseHybridWorker.ps1). Just make sure you have WMF 5 installed on your machine beforehand.
For authoring DSC configurations and testing locally, I would look at the Azure Automation ISE Add-On available on https://www.powershellgallery.com/packages/AzureAutomationAuthoringToolkit/0.2.3.6 You can install it by running the below command from an Administrator PowerShell ISE window.
Install-Module AzureAutomationAuthoringToolkit -Scope CurrentUser
For loading libraries, I have also noticed that I need to call import-module in order to be able to call methods. I need to do some research to determine the requirement for this. You can see an example I wrote to copy files from Azure Storage using a storage key up on https://github.com/azureautomation/modules/tree/master/cAzureStorage
As you probably don't want to have to deploy the storage library on all nodes, I included the storage library in the sample module above so that it will be automatically distributed to all nodes by the automation service.
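As a rough illustration of that packaging approach (all names below are placeholders, not the actual cAzureStorage manifest), the module manifest can reference a DLL shipped inside the module folder via a relative path, so the assembly travels with the module to every node:
@{
    RootModule           = 'cAzureStorageSync.psm1'                       # placeholder module name
    ModuleVersion        = '1.0.0'
    RequiredAssemblies   = @('.\lib\Microsoft.WindowsAzure.Storage.dll')  # DLL shipped inside the module
    DscResourcesToExport = @('tAzureStorageFileSync')
}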
Hope this helps,
Eamon

Azure Cloud Service Startup task that needs to run a PowerShell script

All,
Note: I have updated the question after some feedback.
Thanks to @jisaak for his help so far.
I have the need to run a PowerShell script that adds TCP bindings and some other stuff when I deploy my Cloud Service.
Here is my Cloud Service Project and Webrole project:
Here is my task in ServiceDefinition.csdef:
And here is the PowerShell script I want to run:
Here is my attempt at the Startup.cmd:
When I deploy I get this in the Azure log:
And this in the powershell log:
Any help would be very much appreciated.
I think I am nearly there but following other people syntax on the web doesn't seem to get me there.
thanks
Russ
I think the issue is that the working directory of the batch command interpreter when it runs Startup.cmd is not what you expect.
The Startup.cmd is located in the \approot\bin\Startup directory but the working directory is \approot\bin.
Therefore the command .\RoleStartup.ps1 is not able to find the RoleStartup.ps1 as it is looking in the bin directory not in the bin\Startup directory.
Solutions I know to this are:
Solution 1:
Use ..\Startup\RoleStartup.ps1 to call the RoleStartup.ps1 from Startup.cmd.
Solution 2:
Change the current working directory in Startup.cmd so that the relative path .\RoleStartup.ps1 is found. I do this by CHDIR %~dp0 (see here) to change into the directory that contains Startup.cmd.
Solution 3:
As Don Lockhart's answer suggested, do not copy the Startup directory to the output; instead, leave it set as "Content" in the Visual Studio project. This means the files within it will exist in the \approot\Startup directory on the Azure instance. (You would then want to make sure that the Startup folder is not publicly accessible via IIS!) Then update the reference to Startup.cmd in ServiceDefinition.csdef to ..\Startup\Startup.cmd, and update the reference to RoleStartup.ps1 in Startup.cmd to ..\Startup\RoleStartup.ps1. This relies on the fact that the working directory is bin and uses ..\Startup to always locate the Startup directory relative to it.
You don't need to set the execution policy within your cmd - just call the script. Also, you should use a relative path, because you can't rely on the C: drive being there.
Change your batch to:
powershell -executionpolicy unrestricted -file .\RoleStartup.ps1
Right-click on RoleStartup.ps1 and Startup.cmd in Visual Studio and ensure that Copy to Output Directory is set to Copy always.
If this still doesn't work, remove the startup call in your csdef, deploy the service, rdp into it and try to invoke the script by yourself to retrieve any errors.
Edit:
Try to adapt your script as below:
Import-Module WebAdministration

$site = $null
do # gets the first website until the result is not $null
{
    $site = Get-WebSite | select -first 1
    Sleep 1
}
until ($site)

# get the appcmd path
$appcmd = Join-Path ([System.Environment]::GetFolderPath('System')) 'inetsrv\appcmd.exe'

# ensure the appcmd.exe is present
if (-not (Test-Path $appcmd))
{
    throw "appcmd.exe not found in '$appcmd'"
}

# The rest of your script ....
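If the goal is the TCP bindings mentioned in the question, the script could then continue along these lines. The site name comes from the loop above, but the net.tcp protocol and binding information are placeholders I've chosen for illustration, not details from the question:
# Hypothetical continuation: add a net.tcp binding to the first site via appcmd.exe.
& $appcmd set site "/site.name:$($site.Name)" "/+bindings.[protocol='net.tcp',bindingInformation='808:*']"
if ($LASTEXITCODE -ne 0)
{
    throw "appcmd.exe failed with exit code $LASTEXITCODE"
}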
I've found it easier in the past to not copy the content to the output directory. I have approot\bin as the working directory. My startUp task element's commandLine attribute uses a relative reference to the .cmd file like so:
The .cmd file references the PowerShell script relatively from the working directory as well:
PowerShell -ExecutionPolicy Unrestricted -f ..\StartUp\RoleStartup.ps1
Ok,
So I am coming back to this after many different attempts to make it work.
I have tried using:
Startup config in the ServiceDefinition.csdef
I have tried registering a scheduled task on the server that scans the Windows Azure log looking for [System[Provider[@Name='Windows Azure Runtime 2.6.0.0'] and EventID=10004]]
Nothing worked either due to security or the timing of events and IIS not being fully setup yet.
So I finally bit the bullet and used my Webrole.cs => public override bool OnStart() method:
Combined with this in the ServiceDefinition.csdef:
Now it all works. This was not the most satisfying result as some of the other ways to do it felt more elegant. Also, many others posted that they got the other ways of doing it to work. Maybe I would have got there eventually but my time was restricted.
thanks
Russ

clear powershell session

Is there a commandlet that clears the current powershell session variables that I have added?
I am using the Add-Type commandlet, and I am getting the error "Cannot add type. The type name already exists."
A possible "work around": Open a powershell window and then to run your script enter powershell .\yourScriptHere.ps1
This launches a new powershell instance which exits when your script exits. If you want to "play" in the new instance then change the invocation to powershell -NoExit .\yourScriptHere.ps1 and the new instance will not exit when the script completes. Enter exit when you need another restart and hit the "up arrow" key to get the previous command. All script output will appear in the same window. The overhead for starting a new powershell instance is low -- appears to be less than 1 second on my laptop.
Unfortunately you can't unload .NET assemblies that have been loaded into the default AppDomain, which is what Add-Type does. You can rename types or namespaces to limp along, but at some point you just have to exit and restart PowerShell.
This is not a PowerShell limitation so much as a .NET/CLR limitation. You can load .NET assemblies into separate AppDomains, which can be unloaded later, but you would have to code that yourself, and it imposes restrictions on the types you plan to use in the separate AppDomain. That is, those types need to work through .NET Remoting, so they either have to derive from MarshalByRefObject or they have to be serializable (and this applies to all the objects referenced by their properties, and so on down the object graph).
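If the immediate problem is just re-running a script that calls Add-Type, a common way to limp along without restarting is to skip the call when the type is already present. The type name and C# source below are placeholders, not from the question:
# Only compile the type if this session doesn't already have it loaded.
if (-not ('MyNamespace.MyType' -as [type]))
{
    Add-Type -TypeDefinition @'
namespace MyNamespace
{
    public class MyType
    {
        public int Value;
    }
}
'@
}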