I'm brand new to PowerShell, as in less than a day of experience. I have an array of objects returned from a Get-ADUser call. I will be doing a lot of lookups, so I thought it best to build a hashtable from it.
Is there a shorthand way to initialize the hashtable with this array and specify one of the object's attributes to use as a key?
Or do I have to loop over the whole array and manually add each item to the set?
$adSet = @{}
foreach ($user in $allusers) {
$adSet.add($user.samAccountname, $user)
}
[...] do I have to loop
Yes
... the whole array ...
No
You don't have to materialize an array and use a loop statement (like foreach(){...}); you can use the pipeline to turn a stream of objects into a hashtable as well, using the ForEach-Object cmdlet - this might prove faster if the input source (in the example below, that would be Get-Service) is slow:
$ServiceTable = Get-Service |ForEach-Object -Begin { $ht = @{} } -Process { $ht[$_.Name] = $_ } -End { return $ht }
The block passed as -Begin will execute once (at the beginning), the block passed to -Process will execute once per pipeline input item, and the block passed to -End will execute once, after all the input has been received and processed.
With your example, that would look something like this:
$ADUserTable = Get-ADUser -Filter * |ForEach-Object -Begin { $ht = @{} } -Process { $ht[$_.SAMAccountName] = $_ } -End { return $ht }
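Once the table is built, lookups are plain key accesses (the account name below is just a placeholder):
$ADUserTable['jdoe']              # returns the full ADUser object for that SAMAccountName
$ADUserTable.ContainsKey('jdoe')  # $true if that key exists in the table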
Every single "cmdlet" in PowerShell maps onto this Begin/Process/End lifecycle, so generalizing this pattern with a custom function is straightforward:
function New-LookupTable {
param(
[Parameter(Mandatory, ValueFromPipeline)]
[array]$InputObject,
[Parameter(Mandatory)]
[string]$Property
)
begin {
# initialize table
$lookupTable = @{}
}
process {
# populate table
foreach($object in $InputObject){
$lookupTable[$object.$Property] = $object
}
}
end {
return $lookupTable
}
}
And use like:
$ADUserTable = Get-ADUser |New-LookupTable -Property SAMAccountName
See the about_Functions_Advanced help topic and related topics for more information about writing advanced, pipeline-enabled functions.
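For example, to read those topics locally (assuming the help content is installed):
Get-Help about_Functions_Advanced
Get-Help about_Functions_Advanced_Parameters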
I have a function that flattens directories in parallel for multiple folders. It works great when I call it in a non-pipeline fashion:
$Files = Get-Content $FileList
Merge-FlattenDirectory -InputPath $Files
But now I want to update my function to work both on the pipeline as well as when called off the pipeline. Someone on Discord recommended that the best way to do this is to defer all processing to the end block, and use the begin and process blocks to add pipeline input to a list. Basically this:
function Merge-FlattenDirectory {
[CmdletBinding()]
param (
[Parameter(Mandatory,Position = 0,ValueFromPipeline)]
[string[]]
$InputPath
)
begin {
$List = [System.Collections.Generic.List[PSObject]]@()
}
process {
if(($InputPath.GetType().BaseType.Name) -eq "Array"){
Write-Host "Array detected"
$List = $InputPath
} else {
$List.Add($InputPath)
}
}
end {
$List | ForEach-Object -Parallel {
# Code here...
} -ThrottleLimit 16
}
}
However, this is still not working on the pipeline for me. When I do this:
$Files | Merge-FlattenDirectory
It actually passes individual arrays of length 1 to the function. So testing for ($InputPath.GetType().BaseType.Name) -eq "Array" isn't really the way forward, as only the first pipeline value gets used.
My million dollar question is the following:
What is the most robust way in the process block to differentiate between pipeline input and non-pipeline input? The function should add all pipeline input to a generic list, and in the case of non-pipeline input, should skip this step and process the collection as-is moving directly to the end block.
The only thing I could think of is the following:
if((($InputPath.GetType().BaseType.Name) -eq "Array") -and ($InputPath.Length -gt 1)){
$List = $InputPath
} else {
$List.Add($InputPath)
}
But this just doesn't feel right. Any help would be extremely appreciated.
You might just do
function Merge-FlattenDirectory {
[CmdletBinding()]
param (
[Parameter(Mandatory,Position = 0,ValueFromPipeline)]
[string[]]
$InputPath
)
begin {
$List = [System.Collections.Generic.List[String]]::new()
}
process {
$InputPath.ForEach{ $List.Add($_) }
}
end {
$List |ForEach-Object -Parallel {
# Code here...
} -ThrottleLimit 16
}
}
Which will process the input values either from the pipeline or the input parameter.
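For example, both of these calls should enumerate the same items (the paths are placeholders):
'C:\Temp\A', 'C:\Temp\B' |Merge-FlattenDirectory            # pipeline input: process runs once per item
Merge-FlattenDirectory -InputPath 'C:\Temp\A', 'C:\Temp\B'  # parameter input: process runs once, .ForEach enumerates the array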
But that doesn't comply with the Strongly Encouraged Development Guidelines to Support Well Defined Pipeline Input (SC02), especially Implement for the Middle of a Pipeline.
This means that if you want to implement the PowerShell pipeline correctly, you should process your items (in parallel) directly in the process block and immediately output any results from there:
function Merge-FlattenDirectory {
[CmdletBinding()]
param (
[Parameter(Mandatory,Position = 0,ValueFromPipeline)]
[string[]]
$InputPath
)
begin {
$SharedPool = New-ThreadPool -Limit 16 # note: New-ThreadPool is conceptual; there is no such built-in cmdlet
}
process {
$InputPath |ForEach-Object -Parallel -ThreadPool $Using:SharedPool { # -ThreadPool is likewise conceptual; ForEach-Object -Parallel has no such parameter
# Process your current item ($_) here ...
}
}
}
In general, script authors are advised to use idiomatic PowerShell, which often comes down to fewer object manipulations and usually results in a correct PowerShell pipeline implementation with lower memory usage.
Please let me know if you intend to collect (and e.g. order) the output based on this suggestion.
Caveat
The repeated invocation of the ForEach-Object -Parallel cmdlet itself is somewhat inefficient, as you open and close a new pipeline on each iteration. Resolving this makes my general statement about idiomatic PowerShell fall apart a bit, but it should be possible by using a steppable pipeline.
To implement this, you might use the ForEach-Object cmdlet as a template:
[System.Management.Automation.ProxyCommand]::Create((Get-Command ForEach-Object))
And set the ThrottleLimit of the thread pool in the begin block:
function Merge-FlattenDirectory {
[CmdletBinding()]
param (
[Parameter(Mandatory,Position = 0,ValueFromPipeline)]
[string[]]
$InputPath
)
begin {
$PSBoundParameters += @{
ThrottleLimit = 4
Parallel = {
Write-Host (Get-Date).ToString('HH:mm:ss.s') 'Started' $_
Start-Sleep -Seconds 3
Write-Host (Get-Date).ToString('HH:mm:ss.s') 'finished' $_
}
}
$wrappedCmd = $ExecutionContext.InvokeCommand.GetCommand('ForEach-Object', [System.Management.Automation.CommandTypes]::Cmdlet)
$scriptCmd = {& $wrappedCmd @PSBoundParameters }
$steppablePipeline = $scriptCmd.GetSteppablePipeline($myInvocation.CommandOrigin)
$steppablePipeline.Begin($PSCmdlet)
}
process {
$InputPath.ForEach{ $steppablePipeline.Process($_) }
}
end {
$steppablePipeline.End()
}
}
1..5 |Merge-FlattenDirectory
17:57:40.40 Started 3
17:57:40.40 Started 2
17:57:40.40 Started 1
17:57:40.40 Started 4
17:57:43.43 finished 3
17:57:43.43 finished 1
17:57:43.43 finished 4
17:57:43.43 finished 2
17:57:43.43 Started 5
17:57:46.46 finished 5
Here's how I would write it with comments where I have changed it.
function Merge-FlattenDirectory {
[CmdletBinding()]
param (
[Parameter(Mandatory,Position = 0,ValueFromPipeline)]
$InputPath # this may be a string, a path object, a file object, or an array
)
begin {
$List = @() # Use an array for less than 100K objects.
}
process {
# Even if InputPath is a single string, foreach will iterate once and set $p.
# If it is an array of strings, add each one. If it is one or more objects,
# try to find the right property for the path.
foreach ($p in $inputPath) {
if ($p -is [String]) {$list += $p }
elseif ($p.Path) {$list += $p.Path}
elseif ($p.FullName) {$list += $p.FullName}
elseif ($p.PSPath) {$list += $p.PSPath}
else {Write-warning "$P makes no sense"}
}
}
end {
$List | ForEach-Object -Parallel {
# Code here...
} -ThrottleLimit 16
}
}
@iRon That "write for the middle of the pipeline" in the docs does not mean write everything in the process block.
function one { @(1,2,3,4,5) }
function two {
param ([parameter(ValueFromPipeline=$true)] $p )
begin {Write-host "Two begins" ; $a = #() }
process {Write-host "Two received $P" ; $a += $p }
end {Write-host "Two ending" ; $a; Write-host "Two ended"}
}
function three {
param ([parameter(ValueFromPipeline=$true)] $p )
begin {Write-host "three Starts" }
process {Write-host "Three received $P" }
end {Write-host "Three ended" }
}
one | two | three
One is treated as an end block.
One, two and three all run their begins (one's is empty).
One's output goes to the process block in two, which just collects the data. Two's end block starts after one's end block finishes, and sends its output on.
At this point three's process block gets input. After two's end block ends, three's end block runs.
Two is "in the middle": it has a process block to deal with multiple piped items (if it were all one 'end' block, it would only process the last one).
I have a script that grabs a series of information from SQL. It then parses the information and passes it to a series of arrays. I want to then pass each array to a separate script.
I've seen that Start-Job should be able to do this, but from my testing it didn't seem to work. This is what I have tried. Each script individually works, and I am currently just using CSVs to pass the information.
Once the information is in the script I need to be able to call specific properties from each object. I did get it to just print the array as a string, but I couldn't call anything specific.
Invoke-Sqlcmd -Query $Q1 -ServerInstance $I -Database $DB | Export-Csv "$Files\Employees.csv"
$emps = Import-Csv "$Files\Employees.csv"
$newaccounts = @()
$deacaccounts = @()
$changedusers = @()
if(Test-Path -Path "$Files\Employees.csv"){
foreach ($emp in $emps) {
if ($emp.emp_num.trim() -ne $emp.EmpNum) {
$newaccounts += $emp
}
if ($emp.emp_num.trim() -eq $emp.EmpNum) {
if ($emp.fname -ne $emp.GivenName -and $emp.lname -ne $emp.SurName) {
$deacaccounts += $emp
$newaccounts += $emp
}
elseif ($emp.dept -ne $emp.DepartmentNumber -or $emp.job_title -ne $emp.JobTitle) {
$changedusers += $emp
}
}
}
}
Start-job -path "script" -argumentlist (,$deacaccounts)
Start-job -path "script" -argumentlist (,$changedusers)
Start-job -path "script" -argumentlist (,$newaccounts )
EDIT:
The Information passed to the scripts would be multiple lines of employee data. I need to be able to grab that info in the "Sub" scripts and perform actions based on them.
EX:
Deacaccounts =
fname Lname empnum
----- ----- ------
ted   kaz   1234
sam   cart  245
If you really need background jobs - it turns out that you don't - note that Start-Job doesn't have a -Path parameter; you'd have to use -ScriptBlock { & "$script" } instead.
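As a sketch of that form, assuming $script holds the .ps1 path (note that objects are serialized when they cross the job boundary, so they arrive as property-only copies):
Start-Job -ScriptBlock { & $using:script $using:deacaccounts }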
To simply invoke the script in the foreground, in sequence, use the following (script representing your .ps1 file path(s)):
& "script" $deacaccounts
& "script" $changedusers
& "script" $newaccounts
Note: &, the call operator, is only needed if the script / executable path is quoted and/or contains variable references (or subexpresions); e.g., a script with path c:\foo\bar.ps1 may be invoked without &; e.g.
c:\foo\bar.ps1 $deacaccounts
Note that your script(s) will receive a single argument each, containing an array of values.
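For that to work, the receiving script needs a parameter that accepts an array; a hypothetical sketch of such a script.ps1:
param(
    [Parameter(Mandatory, Position = 0)]
    [object[]] $Employees   # receives the whole array as one argument
)
foreach ($e in $Employees) {
    # act on each record, e.g. $e.fname, $e.lname, $e.emp_num
}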
If instead, you wanted to pass the array elements as individual (positional) arguments, you'd have to use splatting, where you use the sigil @ instead of $ to pass your variable (e.g.,
& "script" @deacaccounts).
If you need to enumerate the arrays and pass each object individually as a parameter, use the following:
foreach ($obj in $deaccounts) { & "script" $obj }
foreach ($obj in $changedusers) { & "script" $obj }
foreach ($obj in $newaccounts) { & "script" $obj }
If each object should be splatted positionally based on its property values:
foreach ($obj in $deacaccounts) {
$vals = $obj.psobject.Properties.Value
& "script" #vals
}
# ... ditto for $changedusers and $newaccounts
If each object should be splatted by property names, based on both property names and values, you need to convert each object to a hashtable first:
foreach ($obj in $deacaccounts) {
$params = @{}
foreach ($prop in $obj.psobject.Properties) {
$params[$prop.Name] = $prop.Value
}
& "script" #params
}
# ... ditto for $changedusers and $newaccounts
As an aside: Incrementally extending arrays in a loop with += is inefficient, because a new array must be created behind the scenes in every iteration, because arrays are of fixed size.
In general, a much more efficient approach is to use a foreach loop as an expression and let PowerShell itself collect the outputs in an array: [array] $outputs = foreach (...) { ... } - see this answer.
In case you need to create arrays manually, e.g. to create multiple ones, such as in your case, use an efficiently extensible list type, such as [System.Collections.Generic.List[object]] - see this answer.
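A sketch of that approach applied to the variables from the question (only the $newaccounts branch is shown):
$newaccounts  = [System.Collections.Generic.List[object]]::new()
$deacaccounts = [System.Collections.Generic.List[object]]::new()
$changedusers = [System.Collections.Generic.List[object]]::new()
foreach ($emp in $emps) {
    if ($emp.emp_num.Trim() -ne $emp.EmpNum) {
        $newaccounts.Add($emp)   # .Add() appends in place, no array copying
    }
    # ... same pattern for $deacaccounts and $changedusers
}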
I have a small error when running my code. I assign a string to a custom object, but it's parsing the string by itself and throwing an error.
Code:
foreach ($item in $hrdblistofobjects) {
[string]$content = Get-Content -Path $item
[string]$content = $content.Replace("[", "").Replace("]", "")
#here is line 43 which is shown as error as well
foreach ($object in $listofitemsdb) {
$result = $content -match $object
$OurObject = [PSCustomObject]@{
ObjectName = $null
TestObjectName = $null
Result = $null
}
$OurObject.ObjectName = $item
$OurObject.TestObjectName = $object #here is line 52 which is other part of error
$OurObject.Result = $result
$Resultsdb += $OurObject
}
}
This code loads an item and checks if an object exists within that item - basically, whether one string exists within another string - and then saves the result to a variable. I am using this code for other objects and items, but they don't have that \p part, which I am assuming is the issue. I can't put $object into single quotes for obvious reasons (this was suggested on the internet, but in my case it's not possible). So is there any other way to escape \p? I tried $object.Replace("\PMS","\\PMS") but that did not work either (this was suggested somewhere too).
EDIT:
$Resultsdb = @(foreach ($item in $hrdblistofobjects) {
[string]$content = Get-Content -Path $item
[string]$content = $content.Replace("[", "").Replace("]", "")
foreach ($object in $listofitemsdb) {
[PSCustomObject]@{
ObjectName = $item
TestObjectName = $object
Result = $content -match $object
}
}
}
)
$Resultsdb is not defined as an array, hence you get that error when you try to add one object to another object that doesn't implement the addition operator.
You shouldn't be appending to an array in a loop anyway. That will perform poorly, because with each iteration it creates a new array with the size increased by one, copies all elements from the existing array, puts the new item in the new free slot, and then replaces the original array with the new one.
A better approach is to just output your objects in the loop and collect the loop output in a variable:
$Resultsdb = foreach ($item in $hrdblistofobjects) {
...
foreach ($object in $listofitemsdb) {
[PSCustomObject]@{
ObjectName = $item
TestObjectName = $object
Result = $content -match $object
}
}
}
Run the loop in an array subexpression if you need to ensure that the result is an array; otherwise it will be empty or a single object when the loop returns fewer than two results.
$Resultsdb = @(foreach ($item in $hrdblistofobjects) {
...
})
Note that you need to suppress other output on the default output stream in the loop, so that it doesn't pollute your result.
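For example, inside the loop you can assign unwanted output to $null so it stays out of $Resultsdb (the command name here is purely illustrative):
$null = Set-SomethingNoisy -Path $item   # hypothetical command whose output you don't want captured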
I changed the match part to this and it's working fine: $result = $content -match $object.Replace("\PMS","\\PMS").
Sorry for the errors in my posting. I will amend that.
I currently have 4 arrays with different Organizational Unit names from our Active Directory.
I'm doing a big evaluation, and in order not to create a separate ForEach loop for each array (because that's like 400 lines of code) I would like to put the whole thing into a single loop.
However, I need to know which array is currently being processed, so that I can change something for that array in certain places with an IF query.
That's because not all arrays can use the code in the same way; for example, the SearchBase for the Active Directory query must be changed for each array.
Here I created an example and described my problem in the comments (<# #>).
$OU1="1-Users","2-Users","3-Users"
$OU2="1-Computers","2-Computers","3-Computers"
$OU3="1-ServiceAccounts","2-ServiceAccounts","3-ServiceAccounts"
foreach ($ou in $OU1 <#AND OU2,OU3#> ){
if($OU1,$OU2 <#= active#> ){
<# if this array is active - do this code #>
$SearchBase = "OU="+$ou+",OU=SUBOU,DC=intra,DC=lan"
}
if($OU3 <#= active#>){
<# if this array is active - do this code #>
$SearchBase = "OU="+$ou+",DC=intra,DC=lan"
}
<# do this code for all #>
}
I hope you understand what I mean and can help me with my problem. Thank you.
What Lee_Dailey means is this: First create a hashtable with the correct settings, then iterate that:
$ouList = @(
@{ "SearchBase" = "OU=SUBOU,DC=intra,DC=lan"; "OUs" = @("1-Users","2-Users","3-Users") },
@{ "SearchBase" = "OU=SUBOU,DC=intra,DC=lan"; "OUs" = @("1-Computers","2-Computers","3-Computers") },
@{ "SearchBase" = "DC=intra,DC=lan"; "OUs" = @("1-ServiceAccounts","2-ServiceAccounts","3-ServiceAccounts") }
)
foreach ($item in $ouList)
{
foreach ($ou in $item.OUs)
{
$searchBase = "OU=" + $ou + "," + $item.SearchBase
}
}
One way to do this is to append your arrays in an expression (using +). This effectively creates a single collection in which you can then use the -in operator to find a match.
$OU1="1-Users","2-Users","3-Users"
$OU2="1-Computers","2-Computers","3-Computers"
$OU3="1-ServiceAccounts","2-ServiceAccounts","3-ServiceAccounts"
foreach ($ou in $OU1+$OU2+$OU3 ){
if( $ou -in $OU1+$OU2 ){
<# if this array is active - do this code #>
$SearchBase = "OU="+$ou+",OU=SUBOU,DC=intra,DC=lan"
}
if ($ou -in $OU3){
<# if this array is active - do this code #>
$SearchBase = "OU="+$ou+",DC=intra,DC=lan"
}
<# do this code for all #>
}
I am attempting to pass a list of objects from one function to another, one by one.
First function: generate a list of users (objects) near expiry;
Second function: send an email to each user (object)
The first function works fine and outputs a group of objects (or so it would seem) and the second function will accept input and email a single user without issue.
Issues arise only when multiple objects are passed from the first function to the second.
Relevant code snippets are below:
The First function creates a custom object for each located user and adds it to an array, which is then outputted in the end block. Below is an extremely simplified snippet of the code with the essential object creation step:
Function 01
{
#param block goes here etc...
Foreach ($user in $users)
{
$userOutput = @()
$userTable = New-Object PSObject -Property @{
name = $User.Name
SamAccountName = $User.SamAccountName
emailAddress = $User.EmailAddress
expired = $user.PasswordExpired
expiryDate = $ExpiryDate.ToShortDateString()
daysTillExpiry = $daysTillExpiry
smtpRecipientAddress = $User.EmailAddress
smtpRecipientName = $User.Name
}
$userOutput += $userTable
}
Write-Output $userOutput
}
I have also tried writing each custom object ($userTable) straight to the console within each iteration of the Foreach (users) loop.
The Second function accepts pipeline input for a number of matching parameters from the first function, e.g:
[Parameter(Mandatory=$true,ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)][string]$smtpRecipientName
The second function also calls a third function designed specifically to send smtp mail and contains no loops, it just takes the current object from the pipeline and deals with it.
I haven't included the full code for either mail function because it is largely irrelevant. I just want to know whether the objects outputted from the first function can be dealt with one-by-one by the second.
At present, the mail function deals with the first object passed to it, and no others.
Update:
This is what I have in mind (but the second function only deals with the last object that was piped in):
Function Test-UserExp
{
$iteration = 0
For ($i=0;$i -le 9;$i++)
{
$iteration ++
$userTable = New-Object PSObject -Property @{
expiryDate = "TestExpDate_$iteration"
daysTillExpiry = "TestDaysTillExpiry_$iteration"
smtpRecipientAddress = "TestSMTPRecipientAddress_$iteration"
smtpRecipientName = "TestSMTPRecipientName_$iteration"
}
$userTable
}
}
Function Test-MailSend
{
Param
(
[Parameter(ValueFromPipelineByPropertyName=$true)][string]$expiryDate,
[Parameter(ValueFromPipelineByPropertyName=$true)][string]$daysTillExpiry,
[Parameter(ValueFromPipelineByPropertyName=$true)][string]$smtpRecipientAddress,
[Parameter(ValueFromPipelineByPropertyName=$true)][string]$smtpRecipientName
)
Write-Host 'Output from Test-MailSend:'
$expiryDate
$daysTillExpiry
$smtpRecipientAddress
$smtpRecipientName
}
First of all: if you want to process objects in a pipeline, one at a time, don't kill that experience by collecting all the objects first - that's only necessary if you intend to do something with the whole collection at some point. If not, then just output objects as soon as you get them:
foreach ($user in $users) {
New-Object PSObject -Property @{
name = $User.Name
SamAccountName = $User.SamAccountName
emailAddress = $User.EmailAddress
# ...
}
}
In your case you output the whole collection at the end. That's hardly a pipeline experience, if you ask me.
For the second command: if you intend to create a parameter for each property, just leave out the 'ValueFromPipeline' part. Otherwise you may end up with the whole object converted to a string. If you want to take the object as a whole instead, leave out 'ValueFromPipelineByPropertyName' and specify the correct type. And make sure you have a process {} block wrapped around the code that uses parameters taken from the pipeline.
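A minimal sketch of the whole-object variant; the function name and property names here are assumptions based on the objects shown above:
function Send-ExpiryMail {
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        [psobject] $User    # the whole object from the pipeline
    )
    process {
        # read properties here, e.g. $User.smtpRecipientAddress, $User.expiryDate
    }
}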
And finally: why would you write a function to send mail? You have Send-MailMessage, so unless you are doing something this cmdlet doesn't cover, you probably don't need a hand-crafted replacement...
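For reference, a minimal Send-MailMessage call using the parameters from your second function could look like this (SMTP server and sender address are placeholders):
Send-MailMessage -SmtpServer 'smtp.example.com' -From 'noreply@example.com' `
    -To $smtpRecipientAddress `
    -Subject "Password expires in $daysTillExpiry days" `
    -Body "Your password expires on $expiryDate."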
In function 1 you want to create the array before the ForEach loop, so you aren't re-creating the array every iteration.
In the param block for the second function, you want to declare the parameter as an array of strings, not just a string.
Finally, when accepting pipeline input for the second function you will need to use the Begin, Process, and End blocks. The part of the function that repeats for each item should be in the Process block.
Here is a short working sample below:
Function fun1{
$users = @(1,2,3)
$userOutput = @()
Foreach ($user in $users){
$userTable = New-Object PSObject -Property @{
emailAddress = "$user@blah.com"
}
$userOutput += $userTable
}
$userOutput
}
Function fun2{
param(
[parameter(ValueFromPipeLine=$true)]
[String[]]$Recipients
)
begin{}
process{
ForEach ($Recipient in $Recipients){
$_
}
}
end{}
}
fun1 | Select emailAddress | fun2
This will give you the output below:
emailAddress
------------
1@blah.com
2@blah.com
3@blah.com
Here is a great breakdown of how the Begin/Process/End blocks work in PowerShell http://technet.microsoft.com/en-us/magazine/hh413265.aspx
function Set-UserExpiry {
1..10 | foreach {
[PSCustomObject]@{
ExpiryDate = "TestExpDate_$_"
DaysTillExpiry = "TestDaysTillExpiry_$_"
SmtpRecipientAddress = "TestSMTPRecipientAddress_$_"
SmtpRecipientName = "TestSMTPRecipientName_$_"
}
}
}
function Test-UserExpiry {
param
(
[Parameter(ValueFromPipelineByPropertyName = $true)]
[string]$ExpiryDate,
[Parameter(ValueFromPipelineByPropertyName = $true)]
[string]$DaysTillExpiry,
[Parameter(ValueFromPipelineByPropertyName = $true)]
[string]$SmtpRecipientAddress,
[Parameter(ValueFromPipelineByPropertyName = $true)]
[string]$SmtpRecipientName
)
process {
Write-Output 'Output from Test-MailSend:'
$expiryDate
$daysTillExpiry
$smtpRecipientAddress
$smtpRecipientName
Write-Output ''
}
}
Set-UserExpiry | Test-UserExpiry