I know I can type parameters to functions in PowerShell using:
Param (
    [int]$myIntParam
);
And I know I can pass by reference like this:
Param (
    [ref]$myRefParam
);
Is it possible to insist that the reference is to a particular type? For example, is it possible to have it be of type "reference to integer"? Like in C, where I would write "pointer to integer" as "int*"... Is there something analogous in PowerShell?
I tried googling around but couldn't find any info on this.
There is no syntax to specify a "reference-to-type", because [ref] is its own type in PowerShell, not a modifier of other types. However, you can use a validation script to get the same result.
function f {
    param(
        [ValidateScript({$_.Value.GetType() -eq [Int32]})]
        [ref] $i
    )
    $i.value += 1
    "New value is $($i.value)"
}
> $x = 5
> f ([ref]$x)
New value is 6
> $x
6
> $y = 'hello'
> f ([ref]$y)
Exception: Cannot validate argument on parameter 'i'.
I am new to PowerShell, but I am curious what the best practice is for creating default variables in PowerShell. This is an example I am referencing, for the case where you just want to initialize default variables without any intention of passing parameters to the function. Which way is better, 1 or 2, or neither? :)
1.
function test
{
    param ([int]$x = 5, [int]$y = 14)
    $x * $y
}
2.
function test
{
    [int]$x = 5
    [int]$y = 14
    $x * $y
}
It just depends on your use-case. If you truly never intend to change the variables, #2 is correct.
I think you just need to ask yourself what future use-cases might be. Would changing the values break your function? The ability to supply parameters is very useful, if not now, perhaps in the future.
Basically, if you're using the variables as FINAL, #2 is fine, but in all other cases I would say #1 is more correct.
If your intention is to pass parameters to your function, use Param(). By default, any parameter that is not supplied (and has no default) will be $Null.
Function Test
{
    Param(
        [Int]
        $X = 5,
        [Int]
        $Y = 14
    )
    Return $X * $Y
}
Function Test2
{
    $X = 5; $Y = 14
    $X * $Y
}
> Test2
>> 70
> Test
>> 70
> Test 5 20
>> 100
Problem
When calling function f with $array in the code below (PowerShell v2), I get an error:
f : Cannot bind argument to parameter 'array' because it is null.
My Code
$hash1 = @{
    dependent_name = 'ADDM-Fun-3';
}
$obj1 = New-Object -TypeName PSObject -Property $hash1
$array = [System.Collections.ArrayList]@( $obj1, $null )
function f () {
    Param(
        [parameter(Position=0, Mandatory=$true)]
        [System.Collections.ArrayList]$array
    )
    "Hello"
}
f $array
Question
Why does PowerShell do this? It seems to me to be a design flaw, but maybe I am not seeing the big picture.
Comments
I believe this error is occurring because the second element in the ArrayList is $null. I am slightly shocked by this 'finding' because:
It has taken me about 4 hours to track down the issue.
This seems to imply that using strong type definitions in the function is a bad idea, because it causes PowerShell to check every element in the array, which is an unexpected overhead.
If I remove [System.Collections.ArrayList] from the function definition, the problem goes away.
You should use the AllowNull attribute, as stated in the about_Functions_Advanced_Parameters help topic:
Param
(
    [Parameter(Position=0, Mandatory=$true)]
    [AllowNull()]
    [System.Collections.ArrayList]$array
)
It is some kind of defensive programming. PowerShell automatically unwraps arrays and other collections when they are piped. Think globally: you don't want an empty server name in a list of other server names when you pass a bunch of them to a function.
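For example, here is a minimal sketch (the server names are made up for illustration) of why a stray $null in a collection is usually unwelcome:
# Piping an array sends each element down the pipeline separately,
# so a $null element becomes its own (empty) item.
$servers = 'web01', $null, 'web02'
$servers | ForEach-Object { "Connecting to '$_'" }
# Connecting to 'web01'
# Connecting to ''
# Connecting to 'web02'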
Not sure why PowerShell generates an exception, as null is a valid member for an ArrayList. However, you can adjust your parameter validation to allow nulls.
Like this:
[parameter(Position=0, Mandatory=$true)][AllowNull()][System.Collections.ArrayList] $array
Then there is no exception generated.
I want to get 10 values of type short from a .NET function.
In C# it works like this:
Int16[] values = new Int16[10];
Control1.ReadValues(values);
The C# syntax is ReadValues(short[] values).
I tried something like this:
$Control1.ReadValues([array][int16]$Result)
But there are only zeroes in the array.
In the comments you mention:
I believe that the C# function has a ref
So, the method signature is really:
ReadValues(ref short[] values)
Luckily, PowerShell has a [ref] type accelerator for this sort of situation:
# Start by creating an array of Int16, length 10
$Result = [int16[]]@( ,0 * 10 )
# Pass the variable reference with the [ref] keyword
$Control1.ReadValues([ref]$Result)
For more information, see the about_Ref help file.
I can store a data type in a variable like this
$type = [int]
and use it like this:
$type.GetType().Name
but, how can I embed it in another declaration? e.g.
[Func[$type]]
Update 1
So Invoke-Expression will do the trick (thanks Mike z). What I was trying to do is create a lambda expression. This is how I can do it now:
$exp = [System.Linq.Expressions.Expression]
$IR = [Neo4jClient.Cypher.ICypherResultItem]
Invoke-Expression "`$FuncType = [Func[$IR]]"
$ret = $exp::Lambda($FuncType, ...)
But also thanks to @PetSerAl and @Jan for interesting alternatives.
This does not appear to be possible directly, at least according to the PowerShell 3.0 specification.
The [type] syntax is called a type-literal by the spec, and its definition does not include any parts that can be expressions. It is composed of type-names, which are composed of type-characters, but there is nothing dynamic about them.
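For illustration, a short sketch of what this rules out (a variable is not expanded inside a type literal):
$type = [int]
[Func[$type]]   # fails at runtime: the name is taken literally, so the type cannot be found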
Reading through the spec, I noticed that something like this however works:
$type = [int]
$try = Read-Host
$type::"$(if ($try) { 'Try' } else { '' })Parse"
Now you might wonder why $type::$variable is allowed. That is because :: is an operator whose left-hand side is an expression that must evaluate to a type. The right-hand side is a member-name, which allows simple names, string literals, and use of the subexpression operator.
However, PowerShell is extremely resilient and you can do almost anything dynamically via Invoke-Expression. Let's say you want to declare a variable that is a generic delegate based on a type you know only at runtime:
$type = [int] # This could come from somewhere else entirely
Invoke-Expression "`$f = [Func[$type]]{ return 1 }"
Now $f has your delegate. You will need to test this out if $type is some complex nested or generic type, but it should work for most basic types. I tested with [int] and [System.Collections.Generic.List[int]] and it worked fine for both.
It can be achieved by reflection:
$type = [int]
$Func = [Func``1] # you have to use mangled name to get generic type definition.
$Func.MakeGenericType($type)
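As a follow-up sketch (assuming a PowerShell version that supports scriptblock-to-delegate conversion), the constructed type returned by MakeGenericType can then be used, for example, as a conversion target:
$FuncType = [Func``1].MakeGenericType([int])
$f = { 42 } -as $FuncType   # convert a scriptblock to the constructed delegate type
$f.Invoke()                 # 42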
Unfortunately, I don't think you can do this. Have a look at this question: Is possible to cast a variable to a type stored in another variable?.
There is a suggestion that a conversion is possible using Convert.ChangeType method on objects that implement IConvertible, but as far as I can tell this is not implemented in PowerShell.
You can fake it a little bit, by using your stored type in a scriptblock, but this may not be what you are after.
$type = [byte]
$code = [scriptblock]::create("[$type]`$script:var = 10")
& $code
$var.gettype()
IsPublic IsSerial Name BaseType
-------- -------- ---- --------
True True Byte System.ValueType
I'm fairly new to PowerShell, and I'm just not getting how to modify a variable in a parent scope:
$val = 0
function foo()
{
    $val = 10
}
foo
write "The number is: $val"
When I run it I get:
The number is: 0
I would like it to be 10. But PowerShell is creating a new variable that hides the one in the parent scope.
I've tried these, with no success (as per the documentation):
$script:$val = 10
$global:$val = 10
$script:$val = 10
But these don't even 'compile' so to speak.
What am I missing?
You don't need to use the global scope. A variable with the same name may already exist in the shell console, and you would then be updating that one instead. Use the script scope modifier. When using a scope modifier, you don't include a second $ sign in the variable name: it's $script:val, not $script:$val.
$script:val=10
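Applied to the original example, a minimal sketch looks like this:
$val = 0
function foo
{
    $script:val = 10
}
foo
write "The number is: $val"   # The number is: 10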
The parent scope can actually be modified directly with Set-Variable -Scope 1 without the need for Script or Global scope usage. Example:
$val = 0
function foo {
    Set-Variable -scope 1 -Name "Val" -Value "10"
}
foo
write "The number is: $val"
Returns:
The number is: 10
More information can be found in the Microsoft Docs article About Scopes. The critical excerpt from that doc:
Note: For the cmdlets that use the Scope parameter, you can also refer to scopes by number. The number describes the relative position of one scope to another. Scope 0 represents the current, or local, scope. Scope 1 indicates the immediate parent scope. Scope 2 indicates the parent of the parent scope, and so on. Numbered scopes are useful if you have created many recursive scopes.
Be aware that recursive functions require the scope to be adjusted accordingly:
$val = ,0
function foo {
    $b = $val.Count
    Set-Variable -Name 'val' -Value ($val + ,$b) -Scope $b
    if ($b -lt 10) {
        foo
    }
}
Let me point out a third alternative, even though the question has already been answered. If you want to change a variable, don't be afraid to pass it by reference and work with it that way.
$val=1
function bar ($lcl)
{
write "In bar(), `$lcl.Value starts as $($lcl.Value)"
$lcl.Value += 9
write "In bar(), `$lcl.Value ends as $($lcl.Value)"
}
$val
bar([REF]$val)
$val
That returns:
1
In bar(), $lcl.Value starts as 1
In bar(), $lcl.Value ends as 10
10
If you want to use the global scope, you could do something like this:
$global:val=0
function foo()
{
    $global:val=10
}
foo
write "The number is: $val"
Perhaps the easiest way is to dot source the function:
$val = 0
function foo()
{
    $val = 10
}
. foo
write "The number is: $val"
The difference here is that you call foo via . foo.
Dot sourcing runs the function as normal, but it runs inside the parent scope, so there is no child scope. This basically removes scoping. It's only an issue if you start setting or overwriting variables/definitions in the parent scope unintentionally. For small scripts this isn't usually the case, which makes dot sourcing really easy to work with.
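Here is a small sketch of that pitfall (the countToThree function is invented purely for illustration):
$i = 100
function countToThree { for ($i = 1; $i -le 3; $i++) { $i } }
. countToThree   # dot sourcing runs the loop in the caller's scope...
$i               # ...so $i is now 4, not 100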
If you only want a single variable, then you can use a return value, e.g.,
$val = 0
function foo()
{
    return 10
}
$val = foo
write "The number is: $val"
(Or without the return, as it's not necessary in a function)
You can also return multiple values to set multiple variables in this manner, e.g., $a, $b, $c = foo if foo returned 1,2,3.
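For instance, a minimal sketch:
function foo { return 1, 2, 3 }
$a, $b, $c = foo
"a=$a b=$b c=$c"   # a=1 b=2 c=3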
The above approaches are my preferred way to handle cross-scope variables.
As a last alternative, you can also place the write in the function itself, so it's writing the variable in the same scope as it's being defined.
Of course, you can also use the scope namespace solutions, Set-Variable by scope, or pass it via [ref] as demonstrated by others here. There are many solutions in PowerShell.