Difference between Clear-Variable and setting variable to $null - PowerShell

I often use variables which are declared in the script scope to avoid problems with functions and their scopes. I am declaring these variables like this:
New-Variable -Name test -Option AllScope -Value $null
... or sometimes I promote existing variables to the script scope like this, so I can use them throughout:
$script:test = $test
When I want to clear them I either use this:
Clear-Variable test -Scope Script
... or I simply use this:
$test = $null
Is there a difference? What should I prefer and why?

From Get-Help:
The Clear-Variable cmdlet deletes the data stored in a variable, but it does not delete the variable. As a result,
the value of the variable is NULL (empty). If the variable has a specified data or object type, Clear-Variable
preserves the type of the object stored in the variable.
So Clear-Variable and $var = $null are nearly equivalent (with the exception of the typing, which is retained). An exact equivalent would be $var = [mytype]$null.
You can test it yourself:
$p = "rrrr"
Test-Path variable:/p # => $true
$p = $null
Get-Member -InputObject $p # => error
$p = [string]$null
Get-Member -InputObject $p # => it is a string
And to answer what may be the next question: how to completely remove a variable (since an absent variable is different from a null-valued variable)? Simply do
rm variable:/p
Test-Path variable:/p # => $false

To complement Marcanpilami's helpful answer:
Note: In order to remove (undefine) a variable altogether, use Remove-Variable <name> [-Scope <scope>].
Unless $test is defined with Set-Variable -Option AllScope,
Clear-Variable test -Scope Script
and
$test = $null
are NOT generally equivalent.
(With Set-Variable -Option AllScope they are, but then the -Scope argument becomes irrelevant, because then only one instance of the variable exists (conceptually), across all scopes.)
$test = $null - unless executed in the same scope as when variable test was originally created - will implicitly create a test variable in the current scope (and assign $null to it), and leave the original variable untouched. For more on variable scoping in PS, see this answer
Note that variable-assignment syntax offers scoping too, via a scope prefix, but it is limited to global, script, and local (the default): $global:test = $null, $script:test = $null, $local:test = $null
There's also the private scope: a variation of local that prevents descendant scopes from seeing a variable - again, see this answer.
If you've ensured that you are targeting the same scope, the two forms above are functionally equivalent: they assign $null to the target variable.[1]
However, using Clear-Variable allows you to do two things that $<scope>:test = ... doesn't (see the sketch after this list):
the -Scope parameter also accepts a numeric value that indicates the scope relative to the current scope: 0 is the current scope, 1 is the parent scope, and so on.
you can target multiple variables (either as an array of names or using wildcards)
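For instance, a quick sketch demonstrating both features (the variable names are illustrative):
$foo1 = 1; $foo2 = 2
& {
  # -Scope 1 targets the parent scope; wildcard foo* matches both variables there.
  Clear-Variable foo* -Scope 1
}
$foo1, $foo2 # => both are now $null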
[1] Pitfall:
Note that if the target variable is type-constrained (was assigned with "cast notation"; e.g., [int] $i = 1), the type is retained - whether you use $test = $null or Clear-Variable - and an implicit type conversion may occur, which can have unexpected results or fail altogether:
[int] $i = 1 # type-constrain $i as an integer
Clear-Variable i # equivalent of $i = $null
$i # !! $i is now 0 (!), because [int] $null yields 0
[datetime] $d = 1 # type-constrain $d as DateTime
Clear-Variable d # !! FAILS, because `$d = $null` fails, given that
# !! $null cannot be converted to [datetime]

Related

Get-Variable defined in a scriptblock from a psm function

I have the following piece of code:
$x = 'xyz'
& {
$y = 'abc'
foo
}
The foo function is defined in the foo.psm1 module which is imported before the script block is started.
Inside the foo function, I call Get-Variable which shows me x but it doesn't show y. I tried playing with the -Scope parameter: Local, Script, Global, 0 - which is the local scope from what I understood from the docs, 1 - which is the parent scope.
How could I get the y variable inside the foo function?
I'm not looking for a solution such as passing it as an argument. I want something like Get-Variable, but sadly it doesn't see the variable for some reason.
Update:
Based on the comments received, probably more context is needed.
Say that foo receives a ScriptBlock which is using the $using: syntax.
$x = 'xyz'
& {
$y = 'abc'
foo -ScriptBlock {
Write-Host $using:x
Write-Host $using:y
}
}
I'm 'mining' these variables as follows:
$usingAsts = $ScriptBlock.Ast.FindAll( { param($ast) $ast -is [System.Management.Automation.Language.UsingExpressionAst] }, $true) | ForEach-Object { $_ -as [System.Management.Automation.Language.UsingExpressionAst] }
foreach ($usingAst in $usingAsts) {
$varAst = $usingAst.SubExpression -as [System.Management.Automation.Language.VariableExpressionAst]
$var = Get-Variable -Name $varAst.VariablePath.UserPath -ErrorAction SilentlyContinue
}
This is how I'm using Get-Variable and in the case presented above, y cannot be found.
Modules run in their own scope domain (aka session state), which means they generally do not see the caller's variables - unless a (module-external) caller runs directly in the global scope.
For an overview of scopes in PowerShell, see the bottom section of this answer.
However, assuming that you define the function in your module as an advanced one, there is a way to access the caller's state, namely via the automatic $PSCmdlet variable.
Here's a simplified example, using a dynamic module created via the New-Module cmdlet:
# Create a dynamic module that defines function 'foo'
$null = New-Module {
function foo {
# Make the function an advanced (cmdlet-like) one, via
# [CmdletBinding()].
[CmdletBinding()] param()
# Access the value of variable $bar in the
# (module-external) caller's scope.
# To get the variable *object*, use:
# $PSCmdlet.SessionState.PSVariable.Get('bar')
$PSCmdlet.GetVariableValue('bar')
}
}
& {
$bar = 'abc'
foo
}
The above outputs verbatim abc, as desired.
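Applied to the $using: mining scenario from the update, the same technique might look like this (a sketch; foo must be an advanced function so that $PSCmdlet is populated):
function foo {
  [CmdletBinding()]
  param([scriptblock] $ScriptBlock)
  $usingAsts = $ScriptBlock.Ast.FindAll(
    { param($ast) $ast -is [System.Management.Automation.Language.UsingExpressionAst] }, $true)
  foreach ($usingAst in $usingAsts) {
    $varAst = $usingAst.SubExpression -as [System.Management.Automation.Language.VariableExpressionAst]
    # Look the variable up in the module-external caller's session state,
    # not in the module's own state (which is what Get-Variable would search).
    $var = $PSCmdlet.SessionState.PSVariable.Get($varAst.VariablePath.UserPath)
  }
}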

Tab-complete a parameter value based on another parameter's already specified value

This question addresses the following scenario:
Can custom tab-completion for a given command dynamically determine completions based on the value previously passed to another parameter on the same command line, using either a parameter-level [ArgumentCompleter()] attribute or the Register-ArgumentCompleter cmdlet?
If so, what are the limitations of this approach?
Example scenario:
A hypothetical Get-Property command has an -Object parameter that accepts an object of any type, and a -Property parameter that accepts the name of a property whose value to extract from the object.
Now, in the course of typing a Get-Property call, if a value is already specified for -Object, tab-completing -Property should cycle through the names of the specified object's (public) properties.
$obj = [pscustomobject] @{ foo = 1; bar = 2; baz = 3 }
Get-Property -Object $obj -Property # <- pressing <tab> here should cycle
# through 'foo', 'bar', 'baz'
@mklement0, regarding the first limitation stated in your answer:
The custom-completion script block ({ ... }) invoked by PowerShell fundamentally only sees values specified via parameters, not via the pipeline.
I struggled with this, and after some stubbornness I got a working solution.
At least good enough for my tooling, and I hope it can make life easier for many others out there.
This solution has been verified to work with PowerShell versions 5.1 and 7.1.2.
Here I made use of $cmdAst (called $commandAst in the docs), which contains information about the pipeline. With it we can get at the previous pipeline element and even differentiate between it containing only a variable or a command. Yes, A COMMAND: with the help of Get-Command and the command's OutputType() member method, we can get (suggested) property names for those as well!
Example usage
PS> $obj = [pscustomobject] @{ foo = 1; bar = 2; baz = 3 }
PS> $obj | Get-Property -Property # <tab>: bar, baz, foo
PS> "la", "na", "le" | Select-String "a" | Get-Property -Property # <tab>: Chars, Context, Filename, ...
PS> 2,5,2,2,6,3 | group | Get-Property -Property # <tab>: Count, Values, Group, ...
Function code
Note that apart from now using $cmdAst, I also added [Parameter(ValueFromPipeline=$true)] so that we actually pick up the object, and PROCESS {$Object.$Property} so that one can test and see the code actually working.
param(
[Parameter(ValueFromPipeline=$true)]
[object] $Object,
[ArgumentCompleter({
param($cmdName, $paramName, $wordToComplete, $cmdAst, $preBoundParameters)
# Find out if we have pipeline input.
$pipelineElements = $cmdAst.Parent.PipelineElements
$thisPipelineElementAsString = $cmdAst.Extent.Text
$thisPipelinePosition = [array]::IndexOf($pipelineElements.Extent.Text, $thisPipelineElementAsString)
$hasPipelineInput = $thisPipelinePosition -ne 0
$possibleArguments = @()
if ($hasPipelineInput) {
# If we are in a pipeline, find out if the previous pipeline element is a variable or a command.
$previousPipelineElement = $pipelineElements[$thisPipelinePosition - 1]
$pipelineInputVariable = $previousPipelineElement.Expression.VariablePath.UserPath
if (-not [string]::IsNullOrEmpty($pipelineInputVariable)) {
# If previous pipeline element is a variable, get the object.
# Note that it can be a non-existent variable. In such case we simply get nothing.
$detectedInputObject = Get-Variable |
Where-Object {$_.Name -eq $pipelineInputVariable} |
ForEach-Object Value
} else {
$pipelineInputCommand = $previousPipelineElement.CommandElements[0].Value
if (-not [string]::IsNullOrEmpty($pipelineInputCommand)) {
# If previous pipeline element is a command, check if it exists as a command.
$possibleArguments += Get-Command -CommandType All |
Where-Object Name -Match "^$pipelineInputCommand$" |
# Collect properties for each documented output type.
ForEach-Object {$_.OutputType.Type} | ForEach-Object GetProperties |
# Group properties by Name to get unique ones, and sort them by
# the most frequent Name first. The sorting is a perk.
# A command can have multiple output types. If so, we might now
# have multiple properties with identical Name.
Group-Object Name -NoElement | Sort-Object Count -Descending |
ForEach-Object Name
}
}
} elseif ($preBoundParameters.ContainsKey("Object")) {
# If not in pipeline, but object has been given, get the object.
$detectedInputObject = $preBoundParameters["Object"]
}
if ($null -ne $detectedInputObject) {
# The input object might be an array of objects, if so, select the first one.
# We (at least I) are not interested in array properties, but the object element's properties.
if ($detectedInputObject -is [array]) {
$sampleInputObject = $detectedInputObject[0]
} else {
$sampleInputObject = $detectedInputObject
}
# Collect property names.
$possibleArguments += $sampleInputObject | Get-Member -MemberType Properties | ForEach-Object Name
}
# Referring to the about_Functions_Argument_Completion documentation:
# The ArgumentCompleter script block must unroll the values using the pipeline,
# such as ForEach-Object, Where-Object, or another suitable method.
# Returning an array of values causes PowerShell to treat the entire array as one tab completion value.
$possibleArguments | Where-Object {$_ -like "$wordToComplete*"}
})]
[string] $Property
)
PROCESS {$Object.$Property}
Update: See betoz's helpful answer for a more complete solution that also supports pipeline input.
The part of the answer below that clarifies the limitations of pre-execution detection of the input objects' data type still applies.
The following solution uses a parameter-specific [ArgumentCompleter()] attribute as part of the definition of the Get-Property function itself, but the solution analogously applies to separately defining custom-completion logic via the Register-ArgumentCompleter cmdlet.
Limitations:
[See betoz's answer for how to overcome this limitation] The custom-completion script block ({ ... }) invoked by PowerShell fundamentally only sees values specified via parameters, not via the pipeline.
That is, if you type Get-Property -Object $obj -Property <tab>, the script block can determine that the value of $obj is to be bound to the -Object parameter, but that wouldn't work with
$obj | Get-Property -Property <tab> (even if -Object is declared as pipeline-binding).
Fundamentally, only values that can be evaluated without side effects are actually accessible in the script block; in concrete terms, this means:
Literal values (e.g., -Object ([pscustomobject] @{ foo = 1; bar = 2; baz = 3 }))
Simple variable references (e.g., -Object $obj) or property-access or index-access expressions (e.g., -Object $obj.Foo or -Object $obj[0])
Notably, the following values are not accessible:
Method-call results (e.g., -Object $object.Foo())
Command output (via (...), $(...), or @(...)), e.g.
-Object (Invoke-RestMethod http://example.org))
The reason for this limitation is that evaluating such values before actually submitting the command could have undesirable side effects and / or could take a long time to complete.
function Get-Property {
param(
[object] $Object,
[ArgumentCompleter({
# A fixed list of parameters is passed to an argument-completer script block.
# Here, only two are of interest:
# * $wordToComplete:
# The part of the value that the user has typed so far, if any.
# * $preBoundParameters (called $fakeBoundParameters
# in the docs):
# A hashtable of those (future) parameter values specified so
# far that are side effect-free (see above).
param($cmdName, $paramName, $wordToComplete, $cmdAst, $preBoundParameters)
# Was a side effect-free value specified for -Object?
if ($obj = $preBoundParameters['Object']) {
# Get all property names of the objects and filter them
# by the partial value already typed, if any,
# interpreted as a name prefix.
@($obj.psobject.Properties.Name) -like "$wordToComplete*"
}
})]
[string] $Property
)
# ...
}
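To exercise such a completer without pressing Tab interactively, you can feed a command line to PowerShell's built-in TabExpansion2 function (a quick sketch, assuming the Get-Property function above is defined in the session):
$obj = [pscustomobject] @{ foo = 1; bar = 2; baz = 3 }
$line = 'Get-Property -Object $obj -Property '
# Ask the engine for the completions it would offer at the end of $line.
(TabExpansion2 -inputScript $line -cursorColumn $line.Length).CompletionMatches.CompletionText
# -> foo, bar, baz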

Why does the scope of variables change depending on if it's a .ps1 or .psm1 file, and how can this be mitigated?

I have a function that executes a script block. For convenience, the script block does not need to have explicitly defined parameters, but instead can use $_ and $A to refer to the inputs.
In the code, this is done as such:
$_ = $Value
$A = $Value2
& $ScriptBlock
This whole thing is wrapped in a function. Minimal example:
function F {
param(
[ScriptBlock]$ScriptBlock,
[Object]$Value,
[Object]$Value2
)
$_ = $Value
$A = $Value2
& $ScriptBlock
}
If this function is written in a PowerShell script file (.ps1), but imported using Import-Module, the behaviour of F is as expected:
PS> F -Value 7 -Value2 1 -ScriptBlock {$_ * 2 + $A}
15
PS>
However, when the function is written in a PowerShell module file (.psm1) and imported using Import-Module, the behaviour is unexpected:
PS> F -Value 7 -Value2 1 -ScriptBlock {$_ * 2 + $A}
PS>
Using {$_ + 1} instead gives 1. It seems that $_ has a value of $null instead. Presumably, some security measure restricts the scope of the $_ variable or otherwise protects it. Or, possibly, the $_ variable is assigned by some automatic process. Regardless, if only the $_ variable was affected, the first unsuccessful example would return 1.
Ideally, the solution would involve the ability to explicitly specify the environment in which a script block is run. Something like:
Invoke-ScriptBlock -Variables @{"_" = $Value; "A" = $Value2} -InputObject $ScriptBlock
In conclusion, the questions are:
Why can't script blocks in module files access variables defined in functions from which they were called?
Is there a method for explicitly specifying the variables accessible by a script block when invoking it?
Is there some other way of solving this that does not involve including an explicit parameter declaration in the script block?
Out of order:
Is there some other way of solving this that does not involve including an explicit parameter declaration in the script block?
Yes, if you just want to populate $_, use ForEach-Object!
ForEach-Object executes in the caller's local scope, which helps you work around the issue - except you won't have to, because it also automatically binds input to $_/$PSItem:
# this will work both in module-exported commands and standalone functions
function F {
param(
[ScriptBlock]$ScriptBlock,
[Object]$Value
)
ForEach-Object -InputObject $Value -Process $ScriptBlock
}
Now F will work as expected:
PS C:\> F -Value 7 -ScriptBlock {$_ * 2}
14
Ideally, the solution would involve the ability to explicitly specify the environment in which a script block is run. Something like:
Invoke-ScriptBlock -Variables @{"_" = $Value; "A" = $Value2} -InputObject $ScriptBlock
Execute the scriptblock using ScriptBlock.InvokeWithContext():
$functionsToDefine = @{
'Do-Stuff' = {
param($a,$b)
Write-Host "$a - $b"
}
}
$variablesToDefine = @(
[PSVariable]::new("var1", "one")
[PSVariable]::new("var2", "two")
)
$argumentList = @()
{Do-Stuff -a $var1 -b two}.InvokeWithContext($functionsToDefine, $variablesToDefine, $argumentList)
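This writes one - two to the host: $var1 is resolved against the supplied variable list, while two is a literal argument.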
Or, wrapped in a function like your original example:
function F
{
param(
[scriptblock]$ScriptBlock,
[object]$Value
)
$ScriptBlock.InvokeWithContext(@{},@([PSVariable]::new('_',$Value)),@())
}
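Called like the original example, the wrapper now behaves as expected (a quick check):
PS C:\> F -ScriptBlock {$_ * 2} -Value 7
14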
Now you know how to solve your problem, let's get back to the question(s) about module scoping.
At first, it's worth noting that you could actually achieve the above using modules, but sort of in reverse.
(In the following, I use in-memory modules defined with New-Module, but the module scope resolution behavior described is the same as when you import a script module from disk.)
While module scoping "bypasses" normal scope resolution rules (see below for explanation), PowerShell actually supports the inverse - explicit execution in a specific module's scope.
Simply pass a module reference as the first argument to the & call operator, and PowerShell will treat the subsequent arguments as a command to be invoked in said module:
# Our non-module test function
$twoPlusTwo = { return $two + $two }
$two = 2
& $twoPlusTwo # yields 4
# let's try it with explicit module-scoped execution
$myEnv = New-Module {
$two = 2.5
}
& $myEnv $twoPlusTwo # Hell froze over, 2+2=5 (returns 5)
Why can't script blocks in module files access variables defined in functions from which they were called?
If they can, why can't the $_ automatic variable?
Because loaded modules maintain state, and the implementers of PowerShell wanted to isolate module state from the caller's environment.
Why might that be useful, and why might one preclude the other, you ask?
Consider the following example, a non-module function to test for odd numbers:
$two = 2
function Test-IsOdd
{
param([int]$n)
return $n % $two -ne 0
}
If we run the above statements in a script or at an interactive prompt, subsequently invoking Test-IsOdd yields the expected result:
PS C:\> Test-IsOdd 123
True
So far, so great - but relying on the non-local $two variable carries a flaw in this scenario: if, somewhere in our script or in the shell, we accidentally reassign $two, we might break Test-IsOdd completely:
PS C:\> $two = 1 # oops!
PS C:\> Test-IsOdd 123
False
This is expected since, by default, variable scope resolution just wanders up the call stack until it reaches the global scope.
But sometimes you might require state to be kept across executions of one or more functions, like in our example above.
Modules solve this by following slightly different scope resolution rules - module-exported functions defer to something we call module scope (before reaching the global scope).
To illustrate how this solves our problem from before, consider this module-exported version of the same function:
$oddModule = New-Module {
function Test-IsOdd
{
param([int]$n)
return $n % $two -ne 0
}
$two = 2
}
Now, if we invoke our new module-exported Test-IsOdd, we predictably get the expected result, regardless of "contamination" in the caller's scope:
PS C:\> Test-IsOdd 123
True
PS C:\> $two = 1
PS C:\> Test-IsOdd 123 # still works
True
This behavior, while maybe surprising, basically serves to solidify the implicit contract between the module author and the user: the module author doesn't need to worry too much about what's "out there" (the caller's session state), and the user can expect whatever is going on "in there" (the loaded module's state) to work correctly without worrying about what they assign to variables in the local scope.
Module scoping behavior is poorly documented in the help files, but it is explained in some depth in chapter 8 of Bruce Payette's "PowerShell in Action" (ISBN 9781633430297).

Cannot modify a script-scoped variable from inside a function

I am currently writing a script which is supposed to connect to 42 different local servers and get the users of a specific group (fjärrskrivbordsanvändare (Remote Desktop Users in Swedish :D)) from Active Directory. After it has gotten all the users from a server, it has to export them to a file on MY desktop.
The csv file has to look like this:
Company;Users
LawyerSweden;Mike
LawyerSweden;Jennifer
Stockholm Candymakers;Pedro
(Examples)
etc.
Here's the code as of now:
cls
$MolnGroup = 'fjärrskrivbordsanvändare'
$ActiveDirectory = 'activedirectory'
$script:CloudArray
Set-Variable -Name OutputAnvandare -Value ($null) -Scope Script
Set-Variable -Name OutputDomain -Value ($null) -Scope Script
function ReadInfo {
Write-Host("A")
Get-Variable -Exclude PWD,*Preference | Remove-Variable -EA 0
if (Test-Path "C:\file\frickin\path.txt") {
Write-Host("File found")
}else {
Write-Host("Error: File not found, filepath might be invalid.")
Exit
}
$filename = "C:\File\Freakin'\path\super.txt"
$Headers = "IPAddress", "Username", "Password", "Cloud"
$Importedcsv = Import-csv $filename -Delimiter ";" -Header $Headers
$PasswordsArray += @($Importedcsv.password)
$AddressArray = @($Importedcsv | ForEach-Object { $_.IPAddress } )
$UsernamesArray += @($Importedcsv.username)
$CloudArray += @($Importedcsv.cloud)
GetData
}
function GetData([int]$p) {
Write-Host("B")
for ($row = 1; $row -le $UsernamesArray.Length; $row++)
{
# (If the customer has cloud-service on server, proceed)
if($CloudArray[$row] -eq 1)
{
# Code below uses the information read in from a file to connect pc to server(s)
$secstr = New-Object -TypeName System.Security.SecureString
$PasswordsArray[$row].ToCharArray() | ForEach-Object {$secstr.AppendChar($_)}
$cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $UsernamesArray[$row], $secstr
# Runs command on server
$OutputAnvandare = Invoke-Command -computername $AddressArray[$row] -credential $cred -ScriptBlock {
Import-Module Activedirectory
foreach ($Anvandare in (Get-ADGroupMember fjärrskrivbordsanvändare))
{
$Anvandare.Name
}
}
$OutputDomain = Invoke-Command -computername $AddressArray[$row] -credential $cred -ScriptBlock {
Import-Module Activedirectory
foreach ($Anvandare in (Get-ADGroupMember fjärrskrivbordsanvändare))
{
gc env:UserDomain
}
}
$OutputDomain + $OutputAnvandare
}
}
}
function Export {
Write-Host("C")
# Variables for building up a CSV file via Out-File
$filsökväg = "C:\my\file\path\Coolkids.csv"
$ColForetag = "Company"
$ColAnvandare = "Users"
$Emptyline = "`n"
$delimiter = ";"
for ($p = 1; $p -le $AA.Length; $p++) {
# writes out columns in the csv file
$ColForetag + $delimiter + $ColAnvandare | Out-File $filsökväg
# Writes out the domain name and the users
$OutputDomain + $delimiter + $OutputAnvandare | Out-File $filsökväg -Append
}
}
ReadInfo
Export
My problem is, I can't export the users or the domain. As you can see, I tried to make the variables global to the whole script, but $OutputAnvandare and $OutputDomain only contain the information I need inside of the foreach loop. If I try to print them out anywhere else, they're empty?!
This answer focuses on variable scoping, because it is the immediate cause of the problem.
However, it is worth mentioning that modifying variables across scopes is best avoided to begin with; instead, pass values via the success stream (or, less typically, via by-reference variables and parameters ([ref])).
To expound on PetSerAl's helpful comment on the question: The perhaps counter-intuitive thing about PowerShell variable scoping is that:
while you can see (read) variables from ancestral (higher-up) scopes (such as the parent scope) by referring to them by their mere name (e.g., $OutputDomain),
you cannot modify them by name only - to modify them you must explicitly refer to the scope that they were defined in.
Without scope qualification, assigning to a variable defined in an ancestral scope implicitly creates a new variable with the same name in the current scope.
Example that demonstrates the issue:
# Create empty script-level var.
Set-Variable -Scope Script -Name OutputDomain -Value 'original'
# This is the same as:
# $script:OutputDomain = 'original'
# Declare a function that reads and modifies $OutputDomain
function func {
# $OutputDomain from the script scope can be READ
# without scope qualification:
$OutputDomain # -> 'original'
# Try to modify $OutputDomain.
# !! Because $OutputDomain is ASSIGNED TO WITHOUT SCOPE QUALIFICATION
# !! a NEW variable in the scope of the FUNCTION is created, and that
# !! new variable goes out of scope when the function returns.
# !! The SCRIPT-LEVEL $OutputDomain is left UNTOUCHED.
$OutputDomain = 'new'
# !! Now that a local variable has been created, $OutputDomain refers to the LOCAL one.
# !! Without scope qualification, you cannot see the script-level variable
# !! anymore.
$OutputDomain # -> 'new'
}
# Invoke the function.
func
# Print the now current value of $OutputDomain at the script level:
$OutputDomain # !! -> 'original', because the script-level variable was never modified.
Solution:
There are several ways to add scope qualification to a variable reference:
Use a scope modifier, such as script in $script:OutputDomain.
In the case at hand, this is the simplest solution:
$script:OutputDomain = 'new'
Note that this only works with absolute scopes global, script, and local (the default).
A caveat re global variables: they are session-global, so a script assigning to a global variable could inadvertently modify a preexisting global variable, and, conversely, global variables created inside a script continue to exist after the script terminates.
Use Get/Set-Variable -Scope, which - in addition to supporting the absolute scope modifiers - supports relative scope references by 0-based index, where 0 represents the current scope, 1 the parent scope, and so on.
In the case at hand, since the script scope is the next higher scope,
Get-Variable -Scope 1 OutputDomain is the same as $script:OutputDomain, and
Set-Variable -Scope 1 OutputDomain 'new' equals $script:OutputDomain = 'new'.
(A rarely used alternative available inside functions and trap handlers is to use [ref], which allows modifying the variable in the most immediate ancestral scope in which it is defined: ([ref] $OutputDomain).Value = 'new', which, as PetSerAl points out in a comment, is the same as (Get-Variable OutputDomain).Value = 'new')
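For example, a minimal sketch of the [ref] approach:
$OutputDomain = 'original'
function func {
  # Modifies the variable in the nearest ancestral scope that defines it.
  ([ref] $OutputDomain).Value = 'new'
}
func
$OutputDomain # -> 'new'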
For more information, see:
Get-Help about_Variables
Get-Help about_Scopes
Finally, for the sake of completeness, Set-Variable -Option AllScope is a way to avoid having to use scope qualification at all (in all descendant scopes), because effectively then only a single variable by that name exists, which can be read and modified without scope qualification from any (descendant) scope.
# By defining $OutputDomain this way, all descendent scopes
# can both read and assign to $OutputDomain without scope qualification
# (because the variable is effectively a singleton).
Set-Variable -Scope Script -Option AllScope -Name OutputDomain
However, I would not recommend it (at least not without adopting a naming convention), as it obscures the distinction between modifying local variables and all-scope variables:
in the absence of scope qualification, looking at a statement such as $OutputDomain = 'new' in isolation, you cannot tell if a local or an all-scope variable is being modified.
Since you've mentioned that you want to learn, I hope you'll pardon my answer, which is a bit longer than normal.
The issue that's impacting you here is PowerShell variable scoping. When you're setting the values of $OutputAnvandare and $OutputDomain, they only exist for as long as that function is running.
Function variables last until the function ends.
Script variables last until the script ends.
Session/global variables last until the session ends.
Environmental variables persist forever.
If you want to get the values out of them, you could make them Global variables instead, using this syntax:
$global:OutputAnvandare = blahblahblah
While that would be the easiest fix for your code, Global variables are frowned upon in PowerShell, since they subvert the normal PowerShell expectations of variable scopes.
Much better solution :)
Don't be dismayed, you're actually almost there with a really good solution that conforms to PowerShell design rules.
Today, your GetData function grabs the values that we want, but it only emits them to the console. You can see this in this line in GetData:
$OutputDomain + $OutputAnvandare
This is what we'd call emitting an object, or emitting data to the console. We need to STORE this data instead of just writing it. So instead of simply calling the function, as you do today, do this instead:
$Output = GetData
Then your function will run and grab all the AD Users, etc, and we'll grab the results and stuff them in $output. Then you can export the contents of $output later on.
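A minimal sketch of this capture-and-export pattern (the data and file path are illustrative):
function GetData {
  # Emit one object per user instead of writing strings to the console.
  foreach ($user in 'Mike', 'Jennifer') {
    [pscustomobject] @{ Company = 'LawyerSweden'; Users = $user }
  }
}
$Output = GetData
# Note: Export-Csv quotes fields by default; PowerShell 7+ offers -UseQuotes to control this.
$Output | Export-Csv 'C:\my\file\path\Coolkids.csv' -Delimiter ';' -NoTypeInformation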

When does PowerShell honour default values when using $null splat parameters?

Consider the following function:
function f1{
param(
$sb = {},
$s = ''
)
if ($sb -isnot [scriptblock]) { 'scriptblock' }
if ($s -isnot [string] ) { 'string' }
}
Now invoke it with a splat parameter:
PS C:\> $splat = @{foo='bar'}
PS C:\> f1 @splat
As expected, nothing is returned. Now try it again with a $null splat parameter:
PS C:\> $splat = $null
PS C:\> f1 @splat
scriptblock
Oddly, scriptblock is returned. Clearly, at least for the [scriptblock] parameter, PowerShell is not honoring the default value when a $null splat parameter is used. But PowerShell does honor the default value for the [string]. What is going on here?
For what types does PowerShell honour default values when using $null splat parameters?
Isn't this just normal application of positional parameters? You are splatting a single $null which is being applied to $sb.
Compare:
> function f{ param($sb = {}, $s = '') $PSBoundParameters }
> $splat = @(1,2)
> f @splat
Key Value
--- -----
sb 1
s 2
> f @flkejlkfja
Key Value
--- -----
sb
> function f{ param($aaa = 5, $sb = {}, $s = '') $PSBoundParameters }
> f @splat
Key Value
--- -----
aaa 1
sb 2
It's an old question but if it is still interesting...
As others have written, with $splat = $null, calling f1 @splat means the first parameter will get the value $null instead of its default value.
If you want the parameters to use their default values in this case, you have to use $splat = @{} or $splat = @().
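For example, with the f1 function from the question:
PS C:\> $splat = @{}   # empty hashtable: no arguments are passed
PS C:\> f1 @splat
PS C:\>                # nothing returned: both defaults were honored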
Here's a demonstration to help understand what's happening
$splat = @{foo='bar'}
"$(&{$args}@splat)"
-foo: bar
When you splat the hash table, it gets converted to -Key: Value string pairs that become the parameters to your function.
Now try:
$splat = $null
"$(&{$args}#splat)"
Nothing is returned. There are no keys to generate the parameter string from, so the end result is the same as not passing any parameters at all.
To complement Etan Reisner's helpful answer with a more direct demonstration that splatting $null indeed passes $null as the first (and only) positional argument:
$splat = $null
& { [CmdletBinding(PositionalBinding=$False)] param($dummy) } @splat
The above yields the following error:
A positional parameter cannot be found that accepts argument '$null'.
...
Decorating the param() block with [CmdletBinding(PositionalBinding=$False)] ensures that only named parameter values can be passed, causing the positional passing of $null from splatting to trigger the error above.
Note that using the special "null collection" value ([System.Management.Automation.Internal.AutomationNull]::Value) that you get from commands that produce no output for splatting is effectively the same as splatting $null, because that "null collection" value is converted to $null during parameter binding.
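For example, a sketch using the f1 function from the question (& {} produces the "null collection" value):
PS C:\> $splat = & {}   # a command with no output
PS C:\> f1 @splat
scriptblock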
VargaJoe's helpful answer explains how to construct a variable for splatting so that no arguments are passed, so that the callee's default parameter values are honored.