I need to keep track of port assignments for users. I have a csv that contains this:
USERNAME,GUI,DCS,SRS,LATC,TRSP
joeblow,8536,10631,5157,12528,14560
,8118,10979,5048,12775,14413
,8926,10303,5259,12371,14747
,8351,10560,5004,12049,14530
johndoe,8524,10267,5490,12809,14493
,8194,10191,5311,12275,14201
,8756,10813,5714,12560,14193
,8971,10006,5722,12078,14378
janblow,8410,10470,5999,12123,14610
bettydoe,8611,10448,5884,12040,14923
,8581,10965,5832,12400,14230
,8708,10005,5653,12111,14374
,8493,10016,5464,12827,14115
I need to be able to add and remove users in a way that leaves the csv looking as it does now. I have the remove part working with this bit of code:
[io.file]::readalltext("c:\scripts\RNcsv.csv").replace("$username","") | Out-File c:\scripts\RNcsv.csv -Encoding ascii -Force
I tried the reverse of the code above, but it does not want to work with an empty value in that context. I have been unsuccessful in finding a way to add $username to a single record - the first record with an empty name column, to be precise. So when joeshmo comes along, he ends up in the record below joeblow. This csv represents that people have come and gone.
I would take an object-oriented approach using Import-Csv and a re-usable function that takes input from the pipeline:
function Add-User {
param(
[Parameter(Mandatory)]
[string] $Identity,
[Parameter(Mandatory, ValueFromPipeline, DontShow)]
[object] $InputObject
)
begin { $processed = $false }
process {
# if the user has already been added or the UserName column is populated
if($processed -or -not [string]::IsNullOrWhiteSpace($InputObject.UserName)) {
# output this object as-is and go to the next object
return $InputObject
}
# if above condition was not met we can assume this is an empty value in the
# UserName column, so set the new Identity to this row
$InputObject.UserName = $Identity
# output this object
$InputObject
# and set this variable to `$true` to skip further updates on the csv
$processed = $true
}
}
Adding a new user to the Csv would be:
(Import-Csv .\test.csv | Add-User -Identity santiago) | Export-Csv .\test.csv -NoTypeInformation
Note that, since the above is reading and writing to the same file in a single pipeline, the use of the Grouping operator ( ) is mandatory to consume all output from Import-Csv and hold the objects in memory. Without it you would end up with an empty file.
Otherwise just break it into 2 steps (again, this is only needed if reading and writing to the same file):
$csv = Import-Csv .\test.csv | Add-User -Identity santiago
$csv | Export-Csv .\test.csv -NoTypeInformation
Here is a slight modification to the function posted above that allows adding multiple users in one function call. All credit to iRon for coming up with a clever and concise solution.
function Add-User {
param(
[Parameter(Mandatory)]
[string[]] $Identity,
[Parameter(Mandatory, ValueFromPipeline, DontShow)]
[object] $InputObject
)
begin { [System.Collections.Queue] $queue = $Identity }
process {
# if there are no more Identities in Queue or the UserName column is populated
if(-not $queue.Count -or -not [string]::IsNullOrWhiteSpace($InputObject.UserName)) {
# output this object as-is and go to the next object
return $InputObject
}
# if above condition was not met we can assume this is an empty value in the
# UserName column, so dequeue this Identity and set it to this row
$InputObject.UserName = $queue.Dequeue()
# output this object
$InputObject
}
}
(Import-Csv .\test.csv | Add-User -Identity Santiago, 4evernoob, mrX, iRon) | Export-Csv ...
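For the removal side mentioned in the question, the same Import-Csv pattern could be reused instead of the raw text replace. This is only a sketch (it assumes the column is named USERNAME and that a removed user's ports should stay in place, matching the existing empty rows):

function Remove-User {
    param(
        [Parameter(Mandatory)]
        [string] $Identity,

        [Parameter(Mandatory, ValueFromPipeline)]
        [object] $InputObject
    )

    process {
        # blank out the name but keep the port assignments,
        # so the row looks like the other unassigned rows
        if($InputObject.USERNAME -eq $Identity) {
            $InputObject.USERNAME = ''
        }
        $InputObject
    }
}

(Import-Csv .\test.csv | Remove-User -Identity joeblow) | Export-Csv .\test.csv -NoTypeInformation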
In addition to what you ask for and @Santiago's helpful answer (and note), you might want to be able to add multiple usernames at once, to avoid having to recreate the whole file for each user you want to add.
$Csv = ConvertFrom-Csv @'
USERNAME, GUI, DCS, SRS, LATC, TRSP
joeblow, 8536, 10631, 5157, 12528, 14560
, 8118, 10979, 5048, 12775, 14413
, 8926, 10303, 5259, 12371, 14747
, 8351, 10560, 5004, 12049, 14530
johndoe, 8524, 10267, 5490, 12809, 14493
, 8194, 10191, 5311, 12275, 14201
, 8756, 10813, 5714, 12560, 14193
, 8971, 10006, 5722, 12078, 14378
janblow, 8410, 10470, 5999, 12123, 14610
bettydoe, 8611, 10448, 5884, 12040, 14923
, 8581, 10965, 5832, 12400, 14230
, 8708, 10005, 5653, 12111, 14374
, 8493, 10016, 5464, 12827, 14115
'@
$NewUser = 'Santiago', '4evernoob', 'mrX', 'iRon'
$Csv |ForEach-Object { $i = 0 } {
if (!$_.USERNAME) { $_.USERNAME = $NewUser[$i++] }
$_
} |Format-Table
USERNAME GUI DCS SRS LATC TRSP
-------- --- --- --- ---- ----
joeblow 8536 10631 5157 12528 14560
Santiago 8118 10979 5048 12775 14413
4evernoob 8926 10303 5259 12371 14747
mrX 8351 10560 5004 12049 14530
johndoe 8524 10267 5490 12809 14493
iRon 8194 10191 5311 12275 14201
8756 10813 5714 12560 14193
8971 10006 5722 12078 14378
janblow 8410 10470 5999 12123 14610
bettydoe 8611 10448 5884 12040 14923
8581 10965 5832 12400 14230
8708 10005 5653 12111 14374
8493 10016 5464 12827 14115
Note that an out-of-bound index (e.g. $NewUser[99]) returns $Null (which is cast to an empty string) by default. This will produce an error if you set StrictMode to a higher level.
To overcome this, you might also do something like this instead:
if (!$_.USERNAME -and $i -lt @($NewUser).Count) { ...
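Putting that guard into the complete pipeline might look like this (a sketch that reuses $Csv and $NewUser from above and writes to a hypothetical .\test.csv):

$Csv | ForEach-Object { $i = 0 } {
    # only assign a name while there are still new users left in $NewUser
    if (!$_.USERNAME -and $i -lt @($NewUser).Count) { $_.USERNAME = $NewUser[$i++] }
    $_
} | Export-Csv .\test.csv -NoTypeInformation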
I'm having an issue getting a loop in a function to work properly. The goal is to compare the output of some JSON data to existing unified groups in Office 365: if the group already exists, skip it; otherwise, create a new group. The tricky part is that the function that creates the group prepends "gr-" to the group name. Because the compare function is comparing the original JSON data (without the prepended prefix) to Office 365, it has to prepend "gr-" on the fly. If there is a better way to accomplish this last piece, I am certainly open to suggestions.
Here is the latest version of the function. There have been other variations, but none so far have worked. There are no errors, but the code does not identify lists that definitely do exist. I am using simple echo statements for the purpose of testing, the actual code will include the function to create a new group.
# Test variable that cycles through each .json file.
$jsonFiles = Get-ChildItem -Path "c:\tmp\json" -Filter *.json |
Get-Content -Raw
$allobjects = ForEach-Object {
$jsonFiles | ConvertFrom-Json
}
$alreadyCreatedGroup = ForEach-Object {Get-UnifiedGroup | select alias}
# Determine if list already exists in Office 365
function checkForExistingGroup {
[CmdletBinding()]
Param(
[Parameter(Position=0, Mandatory=$true, ValueFromPipeline=$true)]
$InputObject
)
Process {
if ("gr-$($InputObject.alias)" -like $alreadyCreatedGroup) {
echo "Group exists"
} else {
echo "Group does not exist"
}
}
}
$allobjects | checkForExistingGroup
#$alreadyCreatedGroup | checkForExistingGroup
The above code always produces "Group does not exist" for each alias from the JSON data.
The individual variables appear to be outputting correctly:
PS> $alreadyCreatedGroup
Alias
-----
gr-jsonoffice365grouptest1
gr-jsonoffice365grouptest2
gr-jsonoffice365grouptest3
PS> $allobjects.alias
jsonoffice365grouptest3
jsonoffice365grouptest4
If I run the following on its own:
"gr-$($allobjects.alias)"
I get the following output:
gr-jsonoffice365grouptest3 jsonoffice365grouptest4
So on its own it concatenates the output from the JSON files, but I had hoped that by using $InputObject in the function this would be resolved.
Well, a single group will never be -like a list of groups. You want to check if the list of groups contains the alias.
if ($alreadyCreatedGroup -contains "gr-$($InputObject.Alias)") {
echo "Group exists"
} else {
echo "Group does not exist"
}
In PowerShell v3 or newer you could also use the -in operator instead of the -contains operator, which feels more natural to most people:
if ("gr-$($InputObject.Alias)" -in $alreadyCreatedGroup) {
echo "Group exists"
} else {
echo "Group does not exist"
}
And I'd recommend passing the group list to the function as a parameter rather than using a global variable:
function checkForExistingGroup {
[CmdletBinding()]
Param(
[Parameter(Position=0, Mandatory=$true, ValueFromPipeline=$true)]
$InputObject,
[Parameter(Mandatory=$true, ValueFromPipeline=$false)]
[array]$ExistingGroups
)
...
}
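A call could then look something like this (a sketch, PowerShell v3+; passing $alreadyCreatedGroup.Alias rather than the objects themselves keeps the -contains comparison string-to-string):

# inside the function's Process block, compare against the parameter instead of the outer variable:
#   if ($ExistingGroups -contains "gr-$($InputObject.Alias)") { ... }

$allobjects | checkForExistingGroup -ExistingGroups $alreadyCreatedGroup.Alias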
"gr-$($allobjects.Alias)" doesn't produce the result you expect, because the expression basically means: take the Alias properties of all elements in the array/collection $allobjects, concatenate their values with the $OFS character (output field separator), then insert the result after the substring "gr-". That doesn't affect your function, though, because the pipeline already unrolls the input array, so the function sees one input object at a time.
Curious about how to loop through a hash table where each value is an array. Example:
$test = @{
a = "a","1";
b = "b","2";
c = "c","3";
}
Then I would like to do something like:
foreach ($T in $test) {
write-output $T
}
Expected result would be something like:
name value
a a
b b
c c
a 1
b 2
c 3
That's not what currently happens and my use case is to basically pass a hash of parameters to a function in a loop. My approach might be all wrong, but figured I would ask and see if anyone's tried to do this?
Edit:
A bit more clarification. What I'm basically trying to do is pass a lot of array values into a function and loop through those in the hash table prior to passing to a nested function. Example:
First something like:
$parameters = import-csv .\NewComputers.csv
Then something like
$parameters | New-LabVM
Lab VM Code below:
function New-LabVM
{
[CmdletBinding()]
Param (
# Param1 help description
[Parameter(Mandatory=$true,
Position=0,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true)]
[Alias("p1")]
[string[]]$ServerName,
# Param2 help description
[Parameter(Position = 1)]
[int[]]$RAM = 2GB,
# Param3 help description
[Parameter(Position=2)]
[int[]]$ServerHardDriveSize = 40gb,
# Parameter help description
[Parameter(Position=3)]
[int[]]$VMRootPath = "D:\VirtualMachines",
[Parameter(Position=4)]
[int[]]$NetworkSwitch = "VM Switch 1",
[Parameter(Position=4)]
[int[]]$ISO = "D:\ISO\Win2k12.ISO"
)
process
{
New-Item -Path $VMRootPath\$ServerName -ItemType Directory
$Arguments = @{
Name = $ServerName;
MemoryStartupBytes = $RAM;
NewVHDPath = "$VMRootPath\$ServerName\$ServerName.vhdx";
NewVHDSizeBytes = $ServerHardDriveSize
SwitchName = $NetworkSwitch;}
foreach ($Argument in $Arguments){
# Create Virtual Machines
New-VM @Arguments
# Configure Virtual Machines
Set-VMDvdDrive -VMName $ServerName -Path $ISO
Start-VM $ServerName
}
# Create Virtual Machines
New-VM @Arguments
}
}
What you're looking for is parameter splatting.
The most robust way to do that is via hashtables, so you must convert the custom-object instances output by Import-Csv to hashtables:
Import-Csv .\NewComputers.csv | ForEach-Object {
# Convert the custom object at hand to a hashtable.
$htParams = @{}
$_.psobject.properties | ForEach-Object { $htParams[$_.Name] = $_.Value }
# Pass the hashtable via splatting (@) to the target function.
New-LabVM @htParams
}
Note that since parameter binding via splatting is key-based (the hashtable keys are matched against the parameter names), it is fine to use a regular hashtable with its unpredictable key ordering (no need for an ordered hashtable ([ordered] @{ ... }) in this case).
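To make the splatting step concrete: for a hypothetical CSV row that only contains a ServerName column, the conversion above produces a one-key hashtable, and splatting it binds that key to the matching parameter:

$htParams = @{ ServerName = 'LabVM01' }   # built from the row's properties
New-LabVM @htParams                       # binds the same way as: New-LabVM -ServerName 'LabVM01'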
Try this:
for($i=0;$i -lt $test.Count; $i++)
{$test.keys | %{write-host $test.$_[$i]}}
Weirdly, it may output everything in the wrong order, because a plain hashtable does not enumerate $test.keys in insertion order.
EDIT: Here's your solution.
Using the [System.Collections.Specialized.OrderedDictionary] type (via the [ordered] accelerator), you guarantee that the output comes out in the same order as you entered it.
$test = [ordered] @{
a = "a","1";
b = "b","2";
c = "c","3";
}
After running the same solution code as before, you get exactly the output you wanted.
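If the exact Name/Value shape shown in the question is what you're after, a small variation could be used instead (a sketch that assumes every value array has two elements):

0..1 | ForEach-Object {
    $i = $_
    foreach ($key in $test.Keys) {
        # emit one object per key per position, giving a/b/c first, then 1/2/3
        [pscustomobject]@{ Name = $key; Value = $test[$key][$i] }
    }
}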
I am writing a PowerShell-based XML module for our application configuration needs. Following is the one of the functions.
<#
.Synopsis
To update an XML attribute value
.DESCRIPTION
In the XML file for a particular attribute, if it contains valueToFind then replace it with valueToReplace
.EXAMPLE
-------------------------------Example 1 -------------------------------------------------------------------
Update-XMLAttribute -Path "C:\web.Config" -xPath "/configuration/system.serviceModel/behaviors/serviceBehaviors/behavior/serviceMetadata" -attribute "externalMetadataLocation" -valueToFind "http:" -ValueToReplace "https:"
Look for the XPath expression with the attribute mentioned and search whether the value contains "http:". If so, change that to "https:".
.EXAMPLE
-------------------------------Example 2 -------------------------------------------------------------------
Update-XMLAttribute -Path "C:\web.Config" -xPath "/configuration/system.serviceModel/behaviors/serviceBehaviors/behavior/serviceMetadata" -attribute "externalMetadataLocation" -valueToFind "http:" -ValueToReplace "https:"
Same as Example 1 except that the attribute name is passed as part of the XPath expression
#>
function Update-XMLAttribute
{
[CmdletBinding()]
[OutputType([int])]
Param
(
# Web configuration file full path
[Parameter(Mandatory=$true,
ValueFromPipelineByPropertyName=$true,
Position=0)]
[string]$Path,
# XPath expression up to the parent node
[string] $xPath,
# This parameter is optional if you mentioned it in xPath itself
[string] $attribute,
[string] $valueToFind,
[string] $ValueToReplace
)
Try
{
If (Test-path -Path $Path)
{
$xml = New-Object XML
$xml.Load($Path)
# If the xPath expression itself contains an attribute name, extract it and use it as the attribute
If ($xPath.Contains("@")) {
$xPath, $attribute = $xPath -split '/@', 2
}
# Getting the node value using xPath
$Items = Select-Xml -XML $xml -XPath $xPath
ForEach ($Item in $Items)
{
$attributeValue = $Item.node.$attribute
Write-Verbose "Attribute value is $attributeValue "
if ($attributeValue.contains($valueToFind)) {
Write-Verbose "In the attribute $attributeValue - $valueToFind is to be repalced with $ValueToReplace"
$Item.node.$attribute = $attributeValue.replace($valueToFind, $ValueToReplace)
}
}
$xml.Save($Path)
Write-Verbose " Update-XMLAttribute is completed successfully"
}
Else {
Write-Error " The $path is not present"
}
}
Catch {
Write-Error "$_.Exception.Message"
Write-Error "$_.Exception.ItemName"
Write-Verbose " Update-XMLAttribute is failed"
}
} # End Function Update-XMLAttribute
As this cmdlet will be consumed by many, I don't think simply writing to the console is the right approach.
As of now, if my script produces no errors I can assume it completed successfully.
What is the standard practice to get the results from a PowerShell cmdlet so that the consumer knows whether it is successfully completed or not?
The standard practice is to throw exceptions. Each different type of error has a separate exception type which can be used to diagnose further.
Say the file is not present; you do this:
if (-not (Test-Path $file))
{
throw [System.IO.FileNotFoundException] "$file not found."
}
Your cmdlet should document all the possible exceptions it will throw, and when.
Your function should throw if it runs into an error. Leave it to the caller to decide how the error should be treated (ignore, log a message, terminate, whatever).
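For example, once the function throws instead of writing to the console, the caller can react however it likes. A sketch (the path, XPath and attribute are placeholders, and it assumes Update-XMLAttribute throws the FileNotFoundException shown above):

try {
    Update-XMLAttribute -Path 'C:\web.Config' -xPath '/configuration/appSettings' `
        -attribute 'file' -valueToFind 'http:' -ValueToReplace 'https:'
    Write-Verbose 'Update-XMLAttribute completed successfully'
}
catch [System.IO.FileNotFoundException] {
    # the configuration file was missing; log, skip or abort as appropriate
    Write-Warning $_.Exception.Message
}
catch {
    # any other failure bubbled up from the function
    Write-Warning "Update failed: $($_.Exception.Message)"
}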
While you can throw an exception, which PowerShell will catch and wrap in an ErrorRecord, you have more flexibility using the ThrowTerminatingError method. This is the typical approach for a C# based cmdlet.
ThrowTerminatingError(new ErrorRecord(_exception, _exception.GetType().Name, ErrorCategory.NotSpecified, null));
This allows you to pick an error category and provide the target object. BTW what you have above isn't what I'd call a cmdlet. Cmdlets are compiled C# (typically). What you have is an advanced function. :-)
From an advanced function you can access this method like so:
$pscmdlet.ThrowTerminatingError(...)
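Translated to the function above, that could look roughly like this (a sketch for PowerShell 5+, reusing the Test-Path check; the error id and category are only examples):

if (-not (Test-Path -Path $Path)) {
    $exception   = [System.IO.FileNotFoundException]::new("Cannot find configuration file '$Path'.")
    $errorRecord = [System.Management.Automation.ErrorRecord]::new(
        $exception,
        'ConfigFileNotFound',                                          # error id
        [System.Management.Automation.ErrorCategory]::ObjectNotFound,  # category
        $Path                                                          # target object
    )
    $PSCmdlet.ThrowTerminatingError($errorRecord)
}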
Let's say I want to write a helper function that wraps Read-Host. This function will enhance Read-Host by changing the prompt color, calling Read-Host, then changing the color back (simple example for illustrative purposes - not actually trying to solve for this).
Since this is a wrapper around Read-Host, I don't want to repeat all of the parameters of Read-Host (i.e. Prompt and AsSecureString) in the function header. Is there a way for a function to take an unspecified set of parameters and then pass those parameters directly into a cmdlet call within the function? I'm not sure if PowerShell has such a facility.
for example...
function MyFunc( [string] $MyFuncParam1, [int] $MyFuncParam2 , Some Thing Here For Cmdlet Params that I want to pass to Cmdlet )
{
# ...Do some work...
Read-Host Passthru Parameters Here
# ...Do some work...
}
It sounds like you're interested in the 'ValueFromRemainingArguments' parameter attribute. To use it, you'll need to create an advanced function. See the about_Functions_Advanced and about_Functions_Advanced_Parameters help topics for more info.
When you use that attribute, any extra unbound parameters will be assigned to that parameter. I don't think they're usable as-is, though, so I made a little function that will parse them (see below). After parsing them, two variables are returned: one for any unnamed, positional parameters, and one for named parameters. Those two variables can then be splatted to the command you want to run. Here's the helper function that can parse the parameters:
function ParseExtraParameters {
[CmdletBinding()]
param(
[Parameter(ValueFromRemainingArguments=$true)]
$ExtraParameters
)
$ParamHashTable = @{}
$UnnamedParams = @()
$CurrentParamName = $null
$ExtraParameters | ForEach-Object -Process {
if ($_ -match "^-") {
# Parameter names start with '-'
if ($CurrentParamName) {
# Have a param name w/o a value; assume it's a switch
# If a value had been found, $CurrentParamName would have
# been nulled out again
$ParamHashTable.$CurrentParamName = $true
}
$CurrentParamName = $_ -replace "^-|:$"
}
else {
# Parameter value
if ($CurrentParamName) {
$ParamHashTable.$CurrentParamName += $_
$CurrentParamName = $null
}
else {
$UnnamedParams += $_
}
}
} -End {
if ($CurrentParamName) {
$ParamHashTable.$CurrentParamName = $true
}
}
,$UnnamedParams
$ParamHashTable
}
You could use it like this:
PS C:\> ParseExtraParameters -NamedParam1 1,2,3 -switchparam -switchparam2:$false UnnamedParam1
UnnamedParam1
Name Value
---- -----
switchparam True
switchparam2 False
NamedParam1 {1, 2, 3}
Here are two functions that can use the helper function (one is your example):
function MyFunc {
[CmdletBinding()]
param(
[string] $MyFuncParam1,
[int] $MyFuncParam2,
[Parameter(Position=0, ValueFromRemainingArguments=$true)]
$ExtraParameters
)
# ...Do some work...
$UnnamedParams, $NamedParams = ParseExtraParameters @ExtraParameters
Read-Host @UnnamedParams @NamedParams
# ...Do some work...
}
function Invoke-Something {
[CmdletBinding()]
param(
[Parameter(Mandatory=$true, Position=0)]
[string] $CommandName,
[Parameter(ValueFromRemainingArguments=$true)]
$ExtraParameters
)
$UnnamedParameters, $NamedParameters = ParseExtraParameters @ExtraParameters
&$CommandName @UnnamedParameters @NamedParameters
}
After importing all three functions, try these commands:
MyFunc -MyFuncParam1 Param1Here "PromptText" -assecure
Invoke-Something -CommandName Write-Host -Fore Green "Some text" -Back Red
One word: splatting.
Few more words: you can use a combination of $PSBoundParameters and splatting to pass parameters from the outer command to the inner command (assuming the names match). You would first need to remove from $PSBoundParameters any parameter that you don't want to pass through:
$PSBoundParameters.Remove('MyFuncParam1')
$PSBoundParameters.Remove('MyFuncParam2')
Read-Host @PSBoundParameters
EDIT
Sample function body:
function Read-Data {
param (
[string]$First,
[string]$Second,
[string]$Prompt,
[switch]$AsSecureString
)
$PSBoundParameters.Remove('First') | Out-Null
$PSBoundParameters.Remove('Second') | Out-Null
$Result = Read-Host @PSBoundParameters
"First: $First Second: $Second Result: $Result"
}
Read-Data -First Test -Prompt This-is-my-prompt-for-read-host
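Running it would look roughly like this (output reconstructed, with hello typed as sample input):

PS C:\> Read-Data -First Test -Prompt This-is-my-prompt-for-read-host
This-is-my-prompt-for-read-host: hello
First: Test Second:  Result: hello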