How to perform Import-Module in a PowerShell script used as a File Server Resource Manager classification script

I am trying to use the Active Directory PowerShell module inside a classification rule in File Server Resource Manager on Windows Server 2012 R2.
When I try to simply perform:
Import-Module ActiveDirectory
the script crashes (I assume) and no longer updates the classification property.
I tried setting the script parameter -ExecutionPolicy Unrestricted, but that didn't help.
Does anyone know how to get this to work?
Non-working code:
# Global variables available:
# $ModuleDefinition (IFsrmPipelineModuleDefinition)
# $Rule (IFsrmClassificationRule)
# $PropertyDefinition (IFsrmPropertyDefinition)
#
# And (optionally) any parameters you provide in the Script parameters box below,
# e.g. "$a = 1; $b = 2". The string you enter is treated as a script and executed, so the
# variables you define become globally available.
# Optional function to specify when the behavior of this script was last modified,
# if it consumes additional files. Emit one value of type DateTime.
#
# function LastModified
# {
# }
# Required function that outputs a value to be assigned to the specified property for each file classified.
# Emitting no value is allowed, which causes no value to be assigned for the property.
# Emitting more than one value will result in errors during classification.
# begin and end are optional; process is required.
#
function GetPropertyValueToApply
{
    # This parameter is of type IFsrmPropertyBag.
    # It also has an additional method, GetStream, which returns an IO.Stream object
    # to use for reading the contents of the file. Make sure to close the stream
    # after you are done reading from the file.
    param
    (
        [Parameter(Position = 0)] $PropertyBag
    )
    process
    {
        Import-Module ActiveDirectory
        $users = Get-ADUser -Filter * -Properties SID, Department
        return "dummy result"
    }
}
As a note: this works perfectly fine in a PowerShell console; that isn't the issue. The issue is running the code as a classifier for File Server Resource Manager.
I've worked around it for now by creating a CSV file with the results of Get-ADUser and loading that inside the script (so I don't require any non-standard modules), but it would be nicer to run this without a dependency on an external task.
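A minimal sketch of that workaround (the scheduled-task setup and the CSV path are assumptions, not from the original post):
# Export script, run periodically as a scheduled task under an account with AD access:
Import-Module ActiveDirectory
Get-ADUser -Filter * -Properties SID, Department |
    Select-Object -Property SamAccountName, SID, Department |
    Export-Csv -Path 'C:\FSRM\ADUsers.csv' -NoTypeInformation
# Inside the classification script, only built-in cmdlets are then needed:
$users = Import-Csv -Path 'C:\FSRM\ADUsers.csv'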

The classification script is executed by the File Server Resource Manager service (not by the UI you are looking at), which runs under the SYSTEM account.
So you either need to change the account the service runs under, or grant that account rights to the objects you need to access - in my case, Active Directory.
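To check which account the service currently runs under, something like this should work (assuming the FSRM service name is SrmSvc):
# Inspect the logon account of the File Server Resource Manager service:
Get-CimInstance -ClassName Win32_Service -Filter "Name = 'SrmSvc'" |
    Select-Object -Property Name, StartName, State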


Custom variables in TFS Release Management

I'm using the TFS 2017.1 Build and Release feature.
In my release definition, I have a couple of release variables which I need to reference in my PowerShell task (executed on a remote machine). So far, I've tried the following options but couldn't get it to work.
Added an Execute PowerShell task to store release variables into Environment variables:
$releaseVariables = Get-ChildItem Env: | Where-Object { $_.Name -like "ACL_*" }
Write-Host "##vso[task.setvariable variable=aclVariables]$releaseVariables"
Added an Execute PowerShell on remote machine task:
Here I can't read the environment variables (maybe because this is a remote machine task?):
Write-Verbose "problem reading $env:aclVariables" -Verbose
Then I tried passing the environment variable as an argument, but that didn't work either
param
(
    $RbacAccessTokenParams
)
$RbacAccessTokenParams.GetEnumerator() | % {$_.Name}
$RbacAccessTokenParams | % {
    Write-Verbose "variable is $_" -Verbose
    Write-Verbose "name is $_.Name" -Verbose
    Write-Verbose "value is $_.Value" -Verbose
}
This is how I passed as argument
-RbacAccessTokenParams $(aclVariables)
What am I missing here?
I've tested your scenario on my side with TFS 2017.3.1, and it works when passing the environment variable as an argument. You can upgrade your TFS first and try again.
Non-secret variables are already stored as environment variables; you do not need to do anything special to access them. You can access them with $ENV:VariableName. Periods are replaced with underscores. So Foo.Bar would be $env:FOO_BAR.
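For example (the variable name is illustrative):
# A release variable defined as 'Foo.Bar' surfaces on the agent as:
Write-Host "Foo.Bar = $env:FOO_BAR"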
Secret variables should be passed in to the script that requires them.
However, this only applies on the agent. If you're using the PowerShell On Target Machines task to run a script, you need to pass the variables as arguments to the script. There is no way around this, unless you choose to use deployment groups.
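As a sketch of passing a variable as an argument (the parameter name is illustrative), the script deployed to the target machine declares a parameter:
param (
    [string] $AclVariables
)
Write-Verbose "Received: $AclVariables" -Verbose
and the task's script-arguments box supplies the value, e.g. -AclVariables "$(aclVariables)".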
Or, better yet, follow a configuration-as-code convention and store application-specific values in source controlled configuration files that your scripts read, so that you are not tightly coupled to your deployment orchestration platform.
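A sketch of that convention (the file name and keys are illustrative):
# Read environment-specific settings from a source-controlled JSON file.
$config = Get-Content -Raw -Path .\config\prod.settings.json | ConvertFrom-Json
Write-Host "Deploying $($config.AppName) to $($config.TargetServer)"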

PowerShell: Unable to find type when using PS 5 classes

I'm using PowerShell classes with the WinSCP PowerShell assembly. In one of the methods, I'm using various types from WinSCP.
This works fine as long as I have already added the assembly; however, because of the way PowerShell reads the script when using classes (I assume?), an error is thrown before the assembly can be loaded.
In fact, even if I put a Write-Host at the top, it will not load.
Is there any way of forcing something to run before the rest of the file is parsed?
Transfer() {
    $this.Logger = [Logger]::new()
    try {
        Add-Type -Path $this.Paths.WinSCP
        $ConnectionType = $this.FtpSettings.Protocol.ToString()
        $SessionOptions = New-Object WinSCP.SessionOptions -Property @{
            Protocol = [WinSCP.Protocol]::$ConnectionType
            HostName = $this.FtpSettings.Server
            UserName = $this.FtpSettings.Username
            Password = $this.FtpSettings.Password
        }
Results in an error like this:
Protocol = [WinSCP.Protocol]::$ConnectionType
Unable to find type [WinSCP.Protocol].
But it doesn't matter where I load the assembly. Even if I put the Add-Type cmdlet on the topmost line with a direct path to WinSCPnet.dll, it won't load - it detects the missing types before running anything, it seems.
As you've discovered, PowerShell refuses to run scripts that contains class definitions that reference then-unavailable (not-yet-loaded) types - the script-parsing stage fails.
As of PSv5.1, even a using assembly statement at the top of a script does not help in this case, because in your case the type is referenced in the context of a PS class definition - this may get fixed in PowerShell Core, however; the required work, along with other class-related issues, is being tracked in GitHub issue #6652.
The proper solution is to create a script module (*.psm1) whose associated manifest (*.psd1) declares the assembly containing the referenced types a prerequisite, via the RequiredAssemblies key.
See alternative solution at the bottom if using modules is not an option.
Here's a simplified walk-through:
Create test module tm as follows:
Create module folder ./tm and manifest (*.psd1) in it:
# Create module folder (remove a preexisting ./tm folder if this fails).
$null = New-Item -Type Directory -ErrorAction Stop ./tm
# Create manifest file that declares the WinSCP assembly a prerequisite.
# Modify the path to the assembly as needed; you may specify a relative path, but
# note that the path must not contain variable references (e.g., $HOME).
New-ModuleManifest ./tm/tm.psd1 -RootModule tm.psm1 `
-RequiredAssemblies C:\path\to\WinSCPnet.dll
Create the script module file (*.psm1) in the module folder:
Create file ./tm/tm.psm1 with your class definition; e.g.:
class Foo {
    # As a simple example, return the full name of the WinSCP type.
    [string] Bar() {
        return [WinSCP.Protocol].FullName
    }
}
Note: In the real world, modules are usually placed in one of the standard locations defined in $env:PSMODULEPATH, so that the module can be referenced by name only, without needing to specify a (relative) path.
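You can list those standard locations like this:
# Show the per-user and machine-wide module locations (Windows):
$env:PSModulePath -split ';'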
Use the module:
PS> using module ./tm; [Foo]::new().Bar()
WinSCP.Protocol
The using module statement imports the module and - unlike Import-Module -
also makes the class defined in the module available to the current session.
Since importing the module implicitly loaded the WinSCP assembly thanks to the RequiredAssemblies key in the module manifest, instantiating class Foo, which references the assembly's types, succeeded.
If you need to determine the path to the dependent assembly dynamically in order to load it, or even to ad-hoc-compile one (in which case a RequiredAssemblies manifest entry isn't an option), you should be able to use the approach recommended in Justin Grote's helpful answer: use a ScriptsToProcess manifest entry that points to a *.ps1 script that calls Add-Type to dynamically load dependent assemblies before the script module (*.psm1) is loaded. However, this doesn't actually work as of PowerShell 7.2.0-preview.9: while the definition of the class in the *.psm1 file relying on the dependent assembly's types succeeds, the caller doesn't see the class until a script with a using module ./tm statement is executed a second time:
Create a sample module:
# Create module folder (remove a preexisting ./tm folder if this fails).
$null = New-Item -Type Directory -ErrorAction Stop ./tm
# Create a helper script that loads the dependent
# assembly.
# In this simple example, the assembly is created dynamically,
# with a type [demo.FooHelper]
@'
Add-Type @"
namespace demo {
    public class FooHelper {
    }
}
"@
'@ > ./tm/loadAssemblies.ps1
# Create the root script module.
# Note how the [Foo] class definition references the
# [demo.FooHelper] type created in the loadAssemblies.ps1 script.
@'
class Foo {
    # Simply return the full name of the dependent type.
    [string] Bar() {
        return [demo.FooHelper].FullName
    }
}
'@ > ./tm/tm.psm1
# Create the manifest file, designating loadAssemblies.ps1
# as the script to run (in the caller's scope) before the
# root module is parsed.
New-ModuleManifest ./tm/tm.psd1 -RootModule tm.psm1 -ScriptsToProcess loadAssemblies.ps1
Now, still as of PowerShell 7.2.0-preview.9, trying to use the module's [Foo] class inexplicably succeeds only after calling using module ./tm twice - which you cannot do in a single script, rendering this approach useless for now:
# As of PowerShell 7.2.0-preview.9:
# !! First attempt FAILS:
PS> using module ./tm; [Foo]::new().Bar()
InvalidOperation: Unable to find type [Foo]
# Second attempt: OK
PS> using module ./tm; [Foo]::new().Bar()
demo.FooHelper
The problem is a known one, as it turns out, and dates back to 2017 - see GitHub issue #2962.
If your use case doesn't allow the use of modules:
In a pinch you can use Invoke-Expression, but note that it's generally better to avoid Invoke-Expression in the interest of robustness and to avoid security risks.[1]
# Adjust this path as needed.
Add-Type -LiteralPath C:\path\to\WinSCPnet.dll
# By placing the class definition in a string that is invoked at *runtime*
# via Invoke-Expression, *after* the WinSCP assembly has been loaded, the
# class definition succeeds.
Invoke-Expression @'
class Foo {
    # Simply return the full name of the WinSCP type.
    [string] Bar() {
        return [WinSCP.Protocol].FullName
    }
}
'@
[Foo]::new().Bar()
Alternatively, use a two-script approach:
A main script that loads the dependent assemblies,
which then dot-sources a second script that contains the class definitions relying on the types from the dependent assemblies.
This approach is demonstrated in Takophiliac's helpful answer.
[1] It's not a concern in this case, but generally, given that Invoke-Expression can invoke any command stored in a string, applying it to strings not fully under your control can result in the execution of malicious commands - see this answer for more information.
This caveat applies analogously to other languages, such as Bash's built-in eval command.
An additional solution is to put your Add-Type logic into a separate .ps1 file (name it AssemblyBootStrap.ps1 or something) and then add it to the ScriptsToProcess section of your module manifest. ScriptsToProcess runs before the root script module (*.psm1), and the assemblies will be loaded at the time the class definitions are looking for them.
Although it's not the solution per se, I worked around it. However, I'll leave the question open, as it still stands.
Instead of using WinSCP types, I just use strings, seeing as I already have enums that are identical to WinSCP.Protocol:
Enum Protocols {
    Sftp
    Ftp
    Ftps
}
And having set Protocol in FtpSettings:
$FtpSettings.Protocol = [Protocols]::Sftp
I can set the protocol like this:
$SessionOptions = New-Object WinSCP.SessionOptions -Property @{
    Protocol = $this.FtpSettings.Protocol.ToString()
    HostName = $this.FtpSettings.Server
    UserName = $this.FtpSettings.Username
    Password = $this.FtpSettings.Password
}
I used a similar approach for [WinSCP.TransferMode]:
$TransferOptions.TransferMode = "Binary" #[WinSCP.TransferMode]::Binary
First, I would recommend mklement0's answer.
However, there is a bit of running around you can do to get much the same effect with a bit less work, which can be helpful in smaller projects or in the early stages.
It's possible to simply dot-source another .ps1 file containing your classes (which reference a not-yet-loaded library) after you load the referenced assembly.
##########
# MyClasses.ps1
Class myClass
{
    [3rdParty.Fancy.Object] $MyFancyObject
}
Then you can load your custom class library from your main script by dot-sourcing it:
#######
# MyMainScriptFile.ps1
# Load the fancy object's library
Import-Module Fancy.Module                    # if it's in a module
Add-Type -Path "c:\Path\To\FancyLibrary.dll"  # if it's in a dll you have to reference
. C:\Path\to\MyClasses.ps1
The original parse will pass muster, the script will start, your reference will be added, and then, as the script continues, the dot-sourced file will be read and parsed, adding your custom classes without issue, since the referenced library is in memory by the time that code is parsed.
It's still much better to make and use a module with a proper manifest, but this will get you by and is very easy to remember and use.
# Load the dependent assembly first ...
Add-Type -LiteralPath C:\Windows\Microsoft.NET\Framework\v4.0.30319\System.IO.Compression.FileSystem.dll
# ... then read the class definitions as plain text and define them at runtime,
# after the required types are already available.
$psclasses = Get-Content "C:\Windows\Temp\foobarclass.ps1" -Raw
Invoke-Expression $psclasses
[Foo]::new().Bar()

Powershell module abort loading on unsupported hosts

Glossary:
Host: PowerShell host session
Interactive: [Environment]::UserInteractive -eq $True
Scenario:
Create a PowerShell module that will abort properly, and without error, when a condition check fails. In this case, some commands/modules only work properly in fully interactive hosts, like the ISE and the console, but not in pseudo-interactive hosts like the NuGet Package Manager Console.
Failed Solution:
# Add value to Powershell manifest(psd1)
# Issue: Only supports a string for the `PowerShellHostName` property. How to specify both `ConsoleHost` and `Windows PowerShell ISE Host`? Unknown if this property supports regex, and even if it does, will the behavior change since it's not documented?
@{
    # ...
    # Name of the Windows PowerShell host required by this module
    # PowerShellHostName = ''
    # ...
}
Failed Solution:
# Check for interactive shell
# Issue: UserInteractive is still set in embedded shells like NuGet package manager
# console. Commands that expect user input directly often hang.
if ([Environment]::UserInteractive) {
    # Do stuff, dot-source, etc.
}
Failed Solution:
# Issue: returning still leaves module loaded, and it appears in Get-Module list
# Even if value is supplied for return, powershell's return statement is 'special'
# and the value is ignored
if ($Host.Name -inotmatch '(ConsoleHost|Windows PowerShell ISE Host)') {
    Write-Warning "Host [$($Host.Name)] not supported, aborting"
    return
}
Failed Solution:
# Issue: Module isn't loaded, so it can't be unloaded
if ($Host.Name -inotmatch '(ConsoleHost|Windows PowerShell ISE Host)') {
    Remove-Module ThisModuleName
}
Failed Solution:
# Issue: PowerShell module error output is just passed through; Import-Module
# still reports success, even though $Error has more entries than before
if ($Host.Name -inotmatch '(ConsoleHost|Windows PowerShell ISE Host)') {
    Write-Error "Unsupported Host: $($Host.Name)"
}
Annoying solution:
# Issue: Leaves two errors on the stack, one for the throw, one for the module not
# loading successfully
if ($Host.Name -inotmatch '(ConsoleHost|Windows PowerShell ISE Host)') {
    throw "Host [$($Host.Name)] not supported, aborting"
}
Not a solution:
Force user to wrap the import every time.
Questionable Solution:
Split module into nested submodules, one for 'Common', and one for each supported Host. Use subfolder for each, and duplicate psd1 for each. Looks like it will end up being a maintainability nightmare, especially with respect to nested dependencies.
UberModule
/ModuleCommon
/ModuleCommon.(psd1|psm1)
/ConsoleHostSpecific
/ConsoleHostSpecific.(psd1|psm1)
/IseHostSpecific
/IseHostSpecific.(psd1|psm1)
/etc...
Is there a better way to do this, or is the uber-module split the only way to go?
Take a look at the #requires keyword; it may offer a few options that you have not tried yet. I don't know if the NuGet Package Manager Console has a unique ShellId or not.
#Requires -ShellId Microsoft.PowerShell
http://technet.microsoft.com/en-us/library/hh847765.aspx
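To see what value to pass to -ShellId (and, for the host-name checks above, which host name to match), inspect the corresponding automatic variables in each host you care about:
# Run in each host (console, ISE, NuGet Package Manager Console, ...):
$ShellId     # e.g. 'Microsoft.PowerShell' in the console host
$Host.Name   # e.g. 'ConsoleHost' or 'Windows PowerShell ISE Host'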

How to pass output from a PowerShell cmdlet to a script?

I'm attempting to run a PowerShell script with the input being the results of another PowerShell cmdlet. Here's the cross-forest Exchange 2013 PowerShell command I can run successfully for one user by specifying the -Identity parameter:
.\Prepare-MoveRequest.ps1 -Identity "user@domain.com" -RemoteForestDomainController "dc.remotedomain.com" $Remote -UseLocalObject -OverwriteLocalObject -Verbose
I want to run this command for all MailUsers. Therefore, what I want to run is:
Get-MailUser | select windowsemailaddress | .\Prepare-MoveRequest.ps1 -RemoteForestDomainController "dc.remotedomain.com" $Remote -LocalForestDomainController "dc.localdomain.com" -UseLocalObject -OverwriteLocalObject -Verbose
Note that I removed the -Identity parameter because I was feeding it from each Get-MailUser's WindowsEmailAddress property value. However, this returns with a pipeline input error.
I also tried exporting the WindowsEmailAddress property values to a CSV, and then reading it as per the following site, but I also got a pipeline problem: http://technet.microsoft.com/en-us/library/ee861103(v=exchg.150).aspx
Import-Csv mailusers.csv | Prepare-MoveRequest.ps1 -RemoteForestDomainController DC.remotedomain.com -RemoteForestCredential $Remote
What is the best way to feed the windowsemailaddress field from each MailUser to my Prepare-MoveRequest.ps1 script?
EDIT: I may have just figured it out with the following foreach addition to my Import-Csv option above. I'm testing it now:
Import-Csv mailusers.csv | foreach { Prepare-MoveRequest.ps1 -Identity $_.windowsemailaddress -RemoteForestDomainController DC.remotedomain.com -RemoteForestCredential $Remote }
You should declare a custom function called Prepare-MoveRequest instead of simply making it a script. Then, dot-source the script that declares the function, and call the function. To accept pipeline input into your function, you need to declare one or more parameters that use the appropriate parameter attributes, such as ValueFromPipeline or ValueFromPipelineByPropertyName. See the official MSDN documentation for parameter attributes.
For example, let's say I was developing a custom Stop-Process cmdlet. I want to stop a process based on the ProcessID (or PID) of a Windows process. Here is what the command would look like:
function Stop-CustomProcess {
    # Specify the CmdletBinding() attribute for our
    # custom advanced function.
    [CmdletBinding()]
    # Specify the param block, and declare the parameter
    # that accepts pipeline input.
    param (
        [Parameter(ValueFromPipelineByPropertyName = $true)]
        [int] $Id
    )
    # You must specify the process block, because we want this
    # code to execute FOR EACH process that is piped into the
    # cmdlet. If we do not specify the process block, then the
    # end block is used by default, which would run only once.
    process {
        Write-Verbose -Message ('Stopping process with PID: {0}' -f $Id);
        # Stop the process here
    }
}
# 1. Launch three (3) instances of notepad
1..3 | % { notepad; };
# 2. Call the Stop-CustomProcess cmdlet, using pipeline input
Get-Process notepad | Stop-CustomProcess -Verbose;
# 3. Do an actual clean-up
Get-Process notepad | Stop-Process;
Now that we've taken a look at an example of building the custom function ... once you've defined your custom function in your script file, dot-source it in your "main" script.
# Import the custom function into the current session
. $PSScriptRoot\Prepare-MoveRequest.ps1
# Call the function
Get-MailUser | Prepare-MoveRequest -RemoteForestDomainController dc.remotedomain.com $Remote -LocalForestDomainController dc.localdomain.com -UseLocalObject -OverwriteLocalObject -Verbose;
# Note: Since you've defined a parameter named -WindowsEmailAddress that uses the
# ValueFromPipelineByPropertyName attribute, the value of each object's
# WindowsEmailAddress property will be bound to the parameter as it passes
# through the process block.
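Applied to this question, a pipeline-aware skeleton of Prepare-MoveRequest might look like this (a sketch; the parameters other than -WindowsEmailAddress are illustrative):
function Prepare-MoveRequest {
    [CmdletBinding()]
    param (
        # Binds to the WindowsEmailAddress property of each piped MailUser.
        [Parameter(ValueFromPipelineByPropertyName = $true)]
        [string] $WindowsEmailAddress,
        [string] $RemoteForestDomainController,
        [string] $LocalForestDomainController
    )
    process {
        Write-Verbose -Message ('Preparing move request for {0}' -f $WindowsEmailAddress);
        # Per-user move-preparation logic goes here.
    }
}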
EDIT: I would like to point out that your edit to your post does not properly handle parameter binding in PowerShell. It may achieve the desired results, but it does not teach the correct method of binding parameters in PowerShell. You don't have to use the ForEach-Object to achieve your desired results. Read through my post, and I believe you will increase your understanding of parameter binding.
My foreach loop did the trick.
Import-Csv mailusers.csv | foreach { Prepare-MoveRequest.ps1 -Identity $_.windowsemailaddress -RemoteForestDomainController DC.remotedomain.com -RemoteForestCredential $Remote }

How can I tell if I'm in a remote PowerShell session?

I would like to execute code that is specific to remote PSSessions. That is, the code doesn't apply locally, but applies to all remote sessions.
Is there any environment variable, function or cmdlet that would effectively return true if I'm in an active PSSession and false if I'm running locally?
Check if the $PSSenderInfo variable exists. From about_Automatic_Variables:
$PSSenderInfo
Contains information about the user who started the PSSession,
including the user identity and the time zone of the originating
computer. This variable is available only in PSSessions.
The $PSSenderInfo variable includes a user-configurable property,
ApplicationArguments, which, by default, contains only the
$PSVersionTable from the originating session. To add data to the
ApplicationArguments property, use the ApplicationArguments parameter
of the New-PSSessionOption cmdlet.
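A minimal check based on that variable (note that under Set-StrictMode, referencing an undefined variable raises an error; the Test-Path variant below avoids that):
# $true in a remote PSSession, $false when running locally:
$null -ne $PSSenderInfo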
You could also test using this:
If ( (Test-Path variable:PSSenderInfo)
        -and ($Null -ne $PSSenderInfo)
        -and ($PSSenderInfo.GetType().Name -eq 'PSSenderInfo') ) {
    Write-Host -Object "This script cannot be run within an active PSSession"
    Exit
}
This was not my initial finding, but it helps with the "variable PSSenderInfo not set" issue that occurs when the variable doesn't exist.