PowerShell array issue

I am experiencing an issue with what should be a very simple task, but for some reason it is not working as expected.
I am running this code via the PowerShell ISE on a Windows 10 PC with PowerShell v5.
GOAL: Create an array from a set of JSON files, with the intent of assigning specific values from the JSON data to PowerShell variables, which will then be fed into an Exchange Online function to create thousands of new Office 365 groups.
ISSUE: While values appear to be correctly populating the array, certain variables pulled from the array are being concatenated. See the specific errors below.
SAMPLE CODE:
Here is a sample JSON file (note: I am only using a very limited subset of the data in each file):
{
    "Alias": "testmigrationlist7",
    "DisplayName": "Test Migration List 7",
    "IsHiddenFromAddressList": true,
    "EmailAddresses": [
        {
            "Action": "Add",
            "Value": "testmigrationlist7@testlab.local",
            "AddressPrimary": true,
            "AddressProtocol": "SMTP"
        }
    ],
    "Members": {
        "Recipients": [
            {
                "Action": "Add",
                "Value": "testuser1"
            },
            {
                "Action": "Add",
                "Value": "testuser2"
            }
        ]
    },
    "AcceptMessagesOnlyFrom": {
        "All": "restricted",
        "Recipients": [
            {
                "Action": "Remove",
                "Value": "testuser1"
            },
            {
                "Action": "Add",
                "Value": "testuser2"
            }
        ]
    }
}
Get content of all JSON files:
$allObjects = @(Get-ChildItem -Path C:\tmp\json\*.json | Get-Content -Raw | ConvertFrom-Json)
If I then test the above array, it appears to output as expected:
$allObjects.displayname
Test Migration List 7
Test Migration List 8
$allObjects.alias
testmigrationlist7
testmigrationlist8
Now the code that takes the above data and loops through the array:
function import-UnixDL2Group {
    New-UnifiedGroup -Alias $allobjects.alias -DisplayName `
        $allobjects.displayname -Owner testowner1 -Members `
        $allobjects.members.recipients.value `
        -EmailAddresses $allobjects.emailaddresses.value
}
foreach($_ in $allObjects.alias){import-UnixDL2Group}
The above outputs the following error and stops:
Cannot bind parameter 'Alias' to the target. Exception setting "Alias": "Property expression "testmigrationlist7 testmigrationlist8" isn't valid....."
Notice how it tries to use both aliases with a space for one alias:
"testmigrationlist7 testmigrationlist8"
The same occurs with DisplayName.
If I test with only 1 JSON file, it works correctly:
$JSONinput = (Get-Content -Path C:\tmp\json\test1.json -Raw) | ConvertFrom-Json

function import-UnixDL2GroupTest {
    New-UnifiedGroup -Alias $JSONinput.alias -DisplayName $JSONinput.displayname `
        -Owner testowner1 -Members $JSONinput.members.recipients.value `
        -EmailAddresses $JSONinput.emailaddresses.value
}

$JSONinput | import-UnixDL2GroupTest
I am sure I am overlooking something very simple, but the answer eludes me at the moment. Any guidance would be greatly appreciated.
Thank you in advance for your consideration.
UPDATE: I have also tried defining a simple array to take the JSON data out of the picture, but I get the same error, so it must be the foreach loop.
$manualArray = @("testmigrationlist7","testmigrationlist8")

function import-Unix2GroupManual {
    New-UnifiedGroup -Alias $manualArray -DisplayName $manualArray `
        -Members testuser1,testuser2
}

foreach($_ in $manualArray){ import-Unix2GroupManual }

Your code tries to invoke New-UnifiedGroup with all aliases at once (since you're using the global variable $allobjects inside the function), which doesn't work. Also, $_ is an automatic variable that holds the current object in a pipeline. Overriding automatic variables for your own purposes is generally not recommended, as it tends to lead to ... interesting side effects.
Parametrize your function like this:
function Import-UnixDL2Group {
    [CmdletBinding()]
    Param(
        [Parameter(Position=0, Mandatory=$true, ValueFromPipeline=$true)]
        $InputObject,

        [Parameter(Position=1, Mandatory=$false)]
        [String]$Owner = 'testowner1'
    )

    Process {
        New-UnifiedGroup -Alias $InputObject.alias `
            -DisplayName $InputObject.displayname `
            -Owner $Owner `
            -Members $InputObject.members.recipients.value `
            -EmailAddresses $InputObject.emailaddresses.value
    }
}
and invoke it like this:
$allObjects | Import-UnixDL2Group
and the problem should disappear.
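If you prefer an explicit loop over piping, the equivalent call with the parametrized function above would be:

foreach ($obj in $allObjects) {
    # Bind each object to -InputObject explicitly instead of via the pipeline
    Import-UnixDL2Group -InputObject $obj
}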

Related

If else statement inside foreach results in overwrite each time it loops

I apologize for the unclear title; I'm finding it hard to articulate my problem. I'm writing a script in PowerShell for the first time. I primarily use Python for short scripts, but for this task I'm forced to use PowerShell because of some limitations that require PowerCLI cmdlets. Let me quickly explain the problem: this script creates and/or assigns tags to VMs in vSphere.
I read the list of VMs into a variable $vms2tag. These are the ones that need to be tagged.
I read a JSON file into a variable and create tag name and description variables based on the data in the JSON (there are key/value pairs that I can plug directly into the names and descriptions). This file also has a 'Server' key whose value is the VM name exactly as it appears in the "Output-VM.csv" file. The JSON file has data about every single VM that exists, but only the ones in $vms2tag need to be tagged.
Based on some if/else conditions (whether the tag category exists, whether the tag exists), the script either creates a tag or uses/assigns an existing one.
Basically, the following code "works" in the sense that it creates the tags, BUT each tag quickly gets overwritten by the next $vm, so the only tag that sticks around on all the VMs is the one created for the last VM in the list.
$myJson = Get-Content 'C:\For-Powershell.json' | Out-String | ConvertFrom-Json
$vms2tag = Get-Content 'C:\Output-VM.txt'

foreach ($vm in $vms2tag) {
    for ($j = 0; $j -lt $myJson.Length; $j++) {
        if ($vm -eq $myJson.Server[$j]) {
            Write-Output "Match!"

            # Variables for Application Owner
            $nameAO = [string]$myJson.Application_Owner[$j]
            $descriptionAO = [string]$myJson.Application_Owner[$j]

            # check if Tag Category and/or Tag exist
            if ((Get-TagCategory -Name "app_owner") -eq $null) {
                New-TagCategory -Name "app_owner" -Cardinality "Multiple"
            }
            if ((Get-Tag -Category "app_owner" | Set-Tag -Name $nameAO -Description $descriptionAO) -eq $null) {
                $myTagAO = Get-TagCategory -Name "app_owner" | New-Tag -Name $nameAO -Description $descriptionAO
                New-TagAssignment -Tag $myTagAO -Entity $myJson.Server[$j]
            }
            else {
                $myTagAO = Get-Tag -Category "app_owner" | Set-Tag -Name $nameAO -Description $descriptionAO
                New-TagAssignment -Tag $myTagAO -Entity $myJson.Server[$j]
            }
        }
    }
}
I checked while the script was running, and each tag is properly applied to its VM based on that VM's data; but when I refresh after the script completes, the tags on each VM all exist but are incorrect, containing only the information valid for the last VM in the $vms2tag list. It seems pretty simple, but I just don't see where I'm messing up. My guess is that the if/else statements are nested incorrectly? It took me a while (~6 hours) to get this far because of other issues with the script, so it's possible I'm just burnt out and not seeing it.
The problem is with the tag logic. The following line overwrites existing tags on every loop iteration:
if ((Get-Tag -Category "app_owner" | Set-Tag -Name $nameAO -Description $descriptionAO) -eq $null) {
Set-Tag modifies the tags piped to it, so it should never be used in a test for existing tags.
I would write the test and assignment block like the following:
$myTagAO = Get-Tag -Category "app_owner" -Name $nameAO -ErrorAction SilentlyContinue
if ($myTagAO -eq $null) {
    $myTagAO = Get-TagCategory -Name "app_owner" | New-Tag -Name $nameAO -Description $descriptionAO
}
New-TagAssignment -Tag $myTagAO -Entity $myJson.Server[$j]
This ensures that each tag is only created once, with the appropriate description.
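Applied to the original loop, a minimal sketch of the corrected block (assuming the same PowerCLI cmdlets and JSON layout as above) might look like this:

foreach ($vm in $vms2tag) {
    for ($j = 0; $j -lt $myJson.Length; $j++) {
        if ($vm -ne $myJson.Server[$j]) { continue }

        $nameAO = [string]$myJson.Application_Owner[$j]
        $descriptionAO = [string]$myJson.Application_Owner[$j]

        # Create the category only if it is missing
        if ($null -eq (Get-TagCategory -Name "app_owner" -ErrorAction SilentlyContinue)) {
            New-TagCategory -Name "app_owner" -Cardinality "Multiple" | Out-Null
        }

        # Read-only lookup; create the tag only if it does not exist yet
        $myTagAO = Get-Tag -Category "app_owner" -Name $nameAO -ErrorAction SilentlyContinue
        if ($null -eq $myTagAO) {
            $myTagAO = Get-TagCategory -Name "app_owner" | New-Tag -Name $nameAO -Description $descriptionAO
        }
        New-TagAssignment -Tag $myTagAO -Entity $myJson.Server[$j]
    }
}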

Providing test cases to Pester V5 test

I'm trying to write a pester test (v5) to see if various services are running on remote computers. This is what I have, which works:
$Hashtable = @(
    @{ ComputerName = "computer1"; ServiceName = "serviceA" }
    @{ ComputerName = "computer1"; ServiceName = "serviceB" }
    @{ ComputerName = "computer2"; ServiceName = "serviceB" }
)

Describe "Checking services" {
    It "check <ServiceName> is running on <ComputerName>" -TestCases $Hashtable {
        (Get-Service -ComputerName $ComputerName -Name $ServiceName).Status | Should -Be "Running"
    }
}
My question is around providing the test data to the test (i.e. the list of computer names and services). Suppose I want to add more services to this list. At the moment, I would be modifying my pester file by adding more services to $Hashtable. It doesn't feel quite right to be doing this to me, and I'd like to get the approach correct at this early stage. My gut tells me that the list of services should be separated from the pester file. Then running the test would involve importing the list of services somehow. Does anyone know if I am going about this the wrong way?
Thanks for any help
Andrew
If the list of servers and services will change often, it would be a good idea to read it from a separate file, especially if you have the tests under version control. This way you can easily see in the history that only the test data has changed, but the test logic didn't.
A good file format for the given test data would be CSV:
ComputerName, ServiceName
computer1, serviceA
computer1, serviceB
computer2, serviceB
You can read the CSV using Import-Csv, but you have to convert each row to a hashtable, because Pester expects an array of hashtables for the -TestCases parameter. Import-Csv outputs an array of PSCustomObject though.
BeforeDiscovery {
    $script:testCases = Import-Csv $PSScriptRoot\TestCases.csv | ForEach-Object {
        # Convert row (PSCustomObject) to hashtable.
        $hashTable = @{}
        $_.PSObject.Properties | ForEach-Object { $hashTable[ $_.Name ] = $_.Value }
        # Implicit output that will be captured in array $script:testCases
        $hashTable
    }
}

Describe "Checking services" {
    It "check <ServiceName> is running on <ComputerName>" -TestCases $script:testCases {
        (Get-Service -ComputerName $ComputerName -Name $ServiceName).Status | Should -Be "Running"
    }
}
Note: While not strictly necessary I have put the code that reads the test cases into the BeforeDiscovery section, as suggested by the docs. This makes our intentions clear.
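Assuming the test file is saved as, say, Services.Tests.ps1 next to TestCases.csv (both names illustrative), you would run it as usual:

# Pester reads the CSV during discovery, then runs one test per row
Invoke-Pester -Path .\Services.Tests.ps1 -Output Detailed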

Understanding Powershell: example - Convert JSON to CSV

I've read several posts (like Convert JSON to CSV using PowerShell) about using PowerShell to convert JSON to CSV. I have also read that it is relatively poor form to use the pipe syntax in scripts: that it's really meant for the command line and can create a hassle for developers to maintain over time.
Using this sample JSON file...
[
    {
        "a": "Value 1",
        "b": 20,
        "g": "Arizona"
    },
    {
        "a": "Value 2",
        "b": 40
    },
    {
        "a": "Value 3"
    },
    {
        "a": "Value 4",
        "b": 60
    }
]
...this code...
((Get-Content -Path $pathToInputFile -Raw) | ConvertFrom-Json) | Export-CSV $pathToOutputFile -NoTypeInformation
...creates a file containing CSV as expected.
"a","b","g"
"Value 1","20","Arizona"
"Value 2","40",
"Value 3",,
"Value 4","60",
This code...
$content = Get-Content -Path $pathToInputFile -Raw
$psObj = ConvertFrom-Json -InputObject $content
Export-Csv -InputObject $psObj -LiteralPath $pathToOutputFile -NoTypeInformation
...creates a file containing nonsense:
"Count","Length","LongLength","Rank","SyncRoot","IsReadOnly","IsFixedSize","IsSynchronized"
"4","4","4","1","System.Object[]","False","True","False"
It looks like maybe an object definition(?).
What is the difference? What PowerShell nuance did I miss when converting the code?
The answer to Powershell - Export a List of Objects to CSV says the problem comes from the -InputObject option causing the object, not its contents, to be sent to Export-Csv, but it doesn't state how to remedy the problem without using the pipe syntax. I'm thinking something like -InputObject $psObj.contents. I realize that's not a real thing, but Get-Member doesn't show me anything that looks like it will solve this.
This is not meant as an answer but just to give you a vague representation of what ConvertTo-Csv and Export-Csv are doing and to help you understand why -InputObject is meant to be bound from the pipeline and should not be used manually.
function ConvertTo-Csv2 {
    param(
        [Parameter(ValueFromPipeline)]
        [Object] $InputObject
    )

    begin {
        $isFirstObject = $true
        filter Normalize {
            if($_ -match '"') { return $_.Replace('"', '""') }
            $_
        }
    }
    process {
        if($isFirstObject) {
            $headers = $InputObject.PSObject.Properties.Name | Normalize
            $isFirstObject = $false
            [string]::Format('"{0}"', [string]::Join('","', $headers))
        }
        $values = foreach($value in $InputObject.PSObject.Properties.Value) {
            $value | Normalize
        }
        [string]::Format('"{0}"', [string]::Join('","', $values))
    }
}
As we can observe, there is no loop enumerating the $InputObject in the process block of this function, yet, because of how this block works, each object coming from the pipeline is processed and converted to a Csv string representation of the object.
Within a pipeline, the Process block executes once for each input object that reaches the function.
If instead, we attempt to use the InputObject parameter from the function, the object being passed as argument will be processed only once.
Calling the function at the beginning, or outside of a pipeline, executes the Process block once.
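A quick demonstration with the sample data, using the ConvertTo-Csv2 function above:

$psObj = (Get-Content -Path $pathToInputFile -Raw) | ConvertFrom-Json

# Pipeline: the array is enumerated, so each row is converted
$psObj | ConvertTo-Csv2

# -InputObject: the whole array is treated as one object, so its own
# properties (Count, Length, ...) are what end up serialized
ConvertTo-Csv2 -InputObject $psObj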
Get-Members doesn't show me anything that looks like it will solve this
The cmdlet is named Get-Member (singular). The different results come from the fact that the two ways of passing values behave differently: the pipeline enumerates values, almost like a foreach($item in $pipeline), while passing by parameter skips that enumeration.
Here I have an array of 3 letters:
$Letters = 'a'..'c'
I get different types depending on how I pass it:
Get-Member -InputObject $Letters   # reports [Object[]]
$Letters | Get-Member              # reports [char]
With the pipeline, the block is processed once for each item:
$Letters | ForEach-Object {
    "iteration: $_"
}
iteration: a
iteration: b
iteration: c
Compare to:
ForEach-Object -InputObject $Letters {
    "iteration: $_"
}
iteration: a b c
Detecting types
Here are a few ways to inspect objects (the ClassExplorer module offers more):
PS> ($Letters).GetType().FullName
System.Object[]
PS> ($Letters[0]).GetType().FullName   # first child
System.Char

PS> $Letters.Count
3
PS> $Letters[0].Count
1

PS> $Letters.pstypenames -join ', '
System.Object[], System.Array, System.Object
PS> $Letters[0].pstypenames -join ', '
System.Char, System.ValueType, System.Object

Tip: $null.Count always returns 0; it does not throw an error.
if($neverExisted.Count -gt 1) { ... }
Misc
I have also read that it is relatively poor form to use the pipe syntax in scripts
This is not true, Powershell is designed around piping objects.
Maybe they were talking about one of these cases.
Example 2: slow operations
In cases where speed matters, the per-object overhead of ForEach-Object compared to a plain foreach loop can be an issue, and working around it requires some extra syntax. If you really need speed, you should probably be calling .NET methods directly anyway.
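For instance, a rough timing comparison (numbers vary by machine; Measure-Command is the built-in way to check):

$data = 1..100000

# Pipeline cmdlet: flexible, but pays per-object overhead
(Measure-Command { $data | ForEach-Object { $_ * 2 } }).TotalMilliseconds

# Language keyword: no pipeline overhead
(Measure-Command { foreach ($n in $data) { $n * 2 } }).TotalMilliseconds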
Example 1: piping when you could use a parameter
I'm guessing they meant piping a variable in cases where you could pass it as a parameter instead:
$text = "hi-world"
# then
$text | Write-Host
# vs
Write-Host -InputObject $Text

value fed by Pipeline versus Argument

I have a simple PowerShell script that should accept string values via pipeline or argument, or just run plain.
# myscript
[CmdletBinding()]
Param(
    [Parameter(ValueFromPipelineByPropertyName, ValueFromPipeline)]
    [string[]]$dnshostname
)

Begin {
    If (-Not $dnshostname) {
        Write-Host "I AM HERE"
        $raw = Get-ADComputer -Filter * -Property *
        $raw | ForEach {
            If (Test-Connection -Delay 15 -ComputerName $_.IPv4Address -Count 1 -ErrorAction SilentlyContinue) {
                $dnshostname += $_.DNSHostname
            }
        }
    }
}
Process {
    ...
    ...
}
When I run this script like this: "myserver" | ./myscript.ps1
I get the "I AM HERE" message, which means the $dnshostname variable is empty. But wasn't it fed from the pipeline?
If I run the same script as ./myscript.ps1 -dnshostname "myserver", I do not get the "I AM HERE" message, which means it correctly determined that $dnshostname is not empty.
I am sure I am missing something very fundamental here. May I get some guidance, pretty please, as to why the value passed via the pipeline triggers my If statement?
Thank you!
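(For context: pipeline input is bound to the parameter once per incoming object, immediately before each run of the Process block, so inside Begin the parameter is still empty; only the -dnshostname argument form populates it before Begin executes. A common restructuring, sketched here as an illustration rather than taken from any posted answer, collects pipeline input in Process and applies the fallback in End:)

# demo.ps1 (hypothetical restructuring of myscript.ps1)
[CmdletBinding()]
Param(
    [Parameter(ValueFromPipelineByPropertyName, ValueFromPipeline)]
    [string[]]$dnshostname
)

Begin   { $collected = [System.Collections.Generic.List[string]]::new() }
Process { if ($dnshostname) { $collected.AddRange($dnshostname) } }
End {
    if ($collected.Count -eq 0) {
        # Fallback: discover computers, as in the original Begin block
        $collected.AddRange([string[]](Get-ADComputer -Filter *).DNSHostname)
    }
    # ... main work against $collected ...
}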

How can I get the workflow definition of a logic app as JSON?

How can I get the workflow definition of a logic app as JSON?
I can create a new Logic App from a JSON definition file using the New-AzLogicApp command, but I can't see how to reverse the process, i.e. get the JSON definition of an existing Logic App.
I've tried Get-AzLogicApp, which returns a Workflow object, but I'm blocked on the last step: getting from the Workflow back to an actual JSON file.
If you want to get the definition of the logic app, try the command as below.
$logicapp = Get-AzResource -ResourceGroupName <ResourceGroupName> -ResourceType Microsoft.Logic/workflows -ResourceName "<logic app name>"
$logicapp.properties.definition | ConvertTo-Json
If you want to get it as a .json file, just change the second line as below.
$logicapp.properties.definition | ConvertTo-Json | Out-File "C:\Users\joyw\Desktop\logic.json"
Update:
You can specify the -Depth parameter of ConvertTo-Json if you want more levels of contained objects included in the JSON representation; for example, with a value of 3:
-Depth
Specifies how many levels of contained objects are included in the JSON representation. The default value is 2.
$logicapp.properties.definition | ConvertTo-Json -Depth 3
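Note that Logic App definitions are often nested much deeper than the default; if nested levels come out as flattened strings in the output, increase the depth (the value below is illustrative):

# A larger -Depth keeps deeply nested actions from being truncated
$logicapp.properties.definition | ConvertTo-Json -Depth 20 | Out-File "C:\Users\joyw\Desktop\logic.json"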
You can use the REST API to get the details:
REST API documentation
Or you can try Get-AzLogicApp | ConvertFrom-Json | ConvertTo-Json and see if that helps.
I've drilled down on this for a client project; you need to do the following:
1) Get-AzLogicApp
$lapp.Definition.ToString() returns the entire definition of the Logic App.
2) Save the definition to a file.
3) Use New-AzLogicApp or Set-AzLogicApp with -DefinitionFilePath pointing to that file:
$a = New-AzLogicApp -ResourceGroupName $rg -Name $name1 -Location $loc -DefinitionFilePath $fileName1 -ParameterFilePath $parm1
$a.Parameters.Count
For -ParameterFilePath I use this content in a file:
{
    "$connections": {
        "value": {
            "office365": {
                "connectionId": "/subscriptions/SUBS-DEPLOY/resourceGroups/RG-DEPLOY/providers/Microsoft.Web/connections/office365",
                "connectionName": "office365",
                "id": "/subscriptions/SUBS-DEPLOY/providers/Microsoft.Web/locations/westeurope/managedApis/office365"
            },
            "sharepointonline": {
                "connectionId": "/subscriptions/SUBS-DEPLOY/resourceGroups/RG-DEPLOY/providers/Microsoft.Web/connections/sharepointonline",
                "connectionName": "sharepointonline",
                "id": "/subscriptions/SUBS-DEPLOY/providers/Microsoft.Web/locations/westeurope/managedApis/sharepointonline"
            }
        }
    }
}
Replace SUBS-DEPLOY with the subscription ID and RG-DEPLOY with the resource group name, and all good.
Hope it helps
Here's the code:
function Get-LogicApp($resourceGroupName, $location, $name)
{
    Write-Host " Get LogicApp Definition $name"
    $lapp = Get-AzLogicApp -ResourceGroupName $resourceGroupName -Name $name
    $o = $lapp.Definition.ToString()
    $fileName = "..\logicapps\" + $name + ".logicapp.json"
    $o | Out-File -FilePath $fileName

    $parms = "..\logicapps\templates\parms.json"
    $fileName = "..\logicapps\" + $name + ".logicapp.parms.json"
    Copy-Item -Path $parms $fileName

    Write-Host " LogicApp Definition $resourceGroupName > $fileName"
}
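For example (resource group and app name are placeholders):

# Exports the definition next to a copy of the shared parameter template
Get-LogicApp -resourceGroupName "RG-DEPLOY" -location "westeurope" -name "my-logic-app"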