Issues with PowerShell POST request body - powershell

I have a bit of an issue with PowerShell + Invoke-WebRequest.
This is my body:
$pram = @{
    "name": "MDE.Windows",
    "id": "$resourceId/extensions/MDE.Windows",
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "location": "westeurope",
    "properties": {
        "autoUpgradeMinorVersion": true,
        "publisher": "Microsoft.Azure.AzureDefenderForServers",
        "type": "MDE.Windows",
        "typeHandlerVersion": "1.0",
        "settings": {
            "azureResourceId": "$resourceId",
            "defenderForServersWorkspaceId": "$subscriptionId",
            "vNextEnabled": "true",
            "forceReOnboarding": true,
            "provisionedBy": "Manual"
        },
        "protectedSettings": {
            "defenderForEndpointOnboardingScript": "$defenderForEndpointOnboardingScript"
        }
    }
}
I don't get what's wrong with my body, because judging by examples from Google this should be right, but it still outputs red errors.
I have also tried @"{ }"@, @'{ }'@, { }, and "{ }", but no matter what I do it is more or less red.

I think you're mistaking PowerShell hashtables for JSON. Normally you would create a hashtable using PowerShell syntax, then convert that object to JSON, e.g.
$pram = @{
    name = "MDE.Windows"
    id   = "$resourceId/extensions/MDE.Windows"
} | ConvertTo-Json
You can now pass the JSON-encoded value of $pram to Invoke-WebRequest.
The other option is to create a String and write the JSON yourself:
$paramString = '{"id": "/extensions/MDE.Windows", "name": "MDE.Windows"}'
(but the first solution is probably the solution you're looking for).
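A fuller sketch of that first approach, using the question's own variables ($resourceId, $subscriptionId, $defenderForEndpointOnboardingScript) plus a hypothetical $uri and $token. One gotcha worth noting: ConvertTo-Json only expands two levels deep by default, so a nested body like this needs -Depth:

$pram = @{
    name       = "MDE.Windows"
    id         = "$resourceId/extensions/MDE.Windows"
    type       = "Microsoft.Compute/virtualMachines/extensions"
    location   = "westeurope"
    properties = @{
        autoUpgradeMinorVersion = $true
        publisher               = "Microsoft.Azure.AzureDefenderForServers"
        type                    = "MDE.Windows"
        typeHandlerVersion      = "1.0"
        settings                = @{
            azureResourceId               = "$resourceId"
            defenderForServersWorkspaceId = "$subscriptionId"
            vNextEnabled                  = "true"
            forceReOnboarding             = $true
            provisionedBy                 = "Manual"
        }
        protectedSettings       = @{
            defenderForEndpointOnboardingScript = "$defenderForEndpointOnboardingScript"
        }
    }
}

# -Depth matters: the default of 2 would silently flatten the nested levels
$body = $pram | ConvertTo-Json -Depth 10

# $uri and $token are placeholders for your actual endpoint and auth token
Invoke-WebRequest -Uri $uri -Method Post -Body $body -ContentType 'application/json' -Headers @{ Authorization = "Bearer $token" }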

Set blob name of Azure Function with PowerShell

I am trying to set (change) the filename of a blob within an Azure Function with PowerShell.
It is working great with this function.json:
{
    "bindings": [
        {
            "name": "InputBlob",
            "type": "blobTrigger",
            "direction": "in",
            "path": "container-src/{name}.{ext}",
            "connection": "stacqadbdev_STORAGE"
        },
        {
            "name": "OutputBlob",
            "type": "blob",
            "direction": "out",
            "path": "container-dest/{name}",
            "connection": "stacqadbdev_STORAGE"
        }
    ]
}
which is just 'copying' the blob's name to another container.
As soon as I want to change the destination blob name to something calculated within my function, I am failing.
I tried to set
{
    "bindings": [
        {
            "name": "InputBlob",
            "type": "blobTrigger",
            "direction": "in",
            "path": "container-src/{name}",
            "connection": "stacqadbdev_STORAGE"
        },
        {
            "name": "OutputBlob",
            "type": "blob",
            "direction": "out",
            "path": "container-dest/{newname}",
            "connection": "stacqadbdev_STORAGE"
        }
    ]
}
and assembled $newname in my run.ps1.
Doing Push-OutputBinding -Name $OutputBlob -Value $Blob has the issue that it wants the Byte[] array, which has no property for its name or the like.
So the bindings configuration just takes the parameters given by the input.
Passing something other than the Byte[] array is not possible...
That's why I always get
Executed 'Functions.CreateVaultEntry' (Failed, Id=775e61ce-b001-4278-a8d8-1c90ea63c062, Duration=91ms)
System.Private.CoreLib: Exception while executing function: Functions.CreateVaultEntry.
Microsoft.Azure.WebJobs.Host: No value for named parameter 'newfilename'.
I just want to take the InputBlob, change its name and write it as the OutputBlob with another name.
What is newname?
When your function is triggered and the trigger binds to {name}, your other bindings will have knowledge of {name}, but {newname} does not make sense to the runtime.
Perhaps what you want is container-dest/{name}_output
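A minimal sketch of that suggestion, assuming the trigger binding from the question stays as-is: set the output path to container-dest/{name}_output, and run.ps1 just forwards the bytes, because the blob name is resolved by the binding expression rather than by code:

# run.ps1 (sketch) - {name} is resolved from the trigger path, so the output
# blob is automatically named "<name>_output"; no renaming logic is needed here.
param([byte[]] $InputBlob, $TriggerMetadata)

Push-OutputBinding -Name OutputBlob -Value $InputBlob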

Have PowerShell pass results to Pentaho

I have a PowerShell script that processes a JSON string. My goal is to have it pass a result set to Pentaho so I can process it and put it in a database table.
My PowerShell script works as expected outside of Pentaho. I can parse the files and get the information I need without any issues. It's when I try to pass those values along that Pentaho returns goofy results.
Here is my script:
$scriptMode = 'GetFileInfo'
$json = '{
    "building": [
        {
            "buildingname": "NAPA Auto Parts",
            "files": [{
                "sheets": [{
                    "name": "BATTERY",
                    "results": [{
                        "filename": "BATTERY - 1679568711.xlsx",
                        "sku": "1679568711"
                    }]
                }],
                "name": "2.15.19.xlsx",
                "status": "processed",
                "fileId": "c586bba6-4382-42c4-9c29-bffc6f7fe0b6"
            }, {
                "name": "Oct-Nov 2018 11.30.18.xlsx",
                "errors": ["Unknown sheet name: TOILET PLUNGER"],
                "status": "failed",
                "fileId": "afa7c43f-26dc-421c-b2eb-45ad1e899c42"
            }]
        },
        {
            "buildingname": "O''Reily Auto Parts",
            "files": [{
                "sheets": [{
                    "name": "ALTERNATOR",
                    "results": [{
                        "filename": "ALTERNATOR - 6.3.19 1629453444.xlsx",
                        "sku": "1629453444"
                    }]
                }, {
                    "name": "OIL FILTER",
                    "results": [{
                        "filename": "OIL FILTER - 6.3.19 1629453444.xlsx",
                        "sku": "1629453444"
                    }]
                }],
                "name": "6.3.19.xlsx",
                "status": "processed",
                "fileId": "647089fe-9592-4e2b-984f-831c4acd4d9c"
            }]
        }
    ]
}'
$psdata = ConvertFrom-Json -InputObject $json
if ($scriptMode -eq "GetFileInfo") {
    $psdata.building | ForEach-Object {
        foreach ($File in $_.files) {
            [PSCustomObject]@{
                BuildingName = $_.buildingname
                FileName     = $File.name
                fileId       = $File.fileId
                Status       = $File.status
            }
        }
    }
}
elseif ($scriptMode -eq "GetErrorInfo") {
    $psdata.building | ForEach-Object {
        foreach ($File in $_.files) {
            [PSCustomObject]@{
                BuildingName = $_.buildingname
                Errors       = $File.errors
                SheetName    = $File.sheets.name
                fileId       = $File.fileId
            } | Where-Object { $_.Errors -ne $null }
        }
    }
}
And here's how I have my transformation set up. I have a table input query that sets the run command for PowerShell based on what I want the script to do (either get file info or get error info).
Then I have the "Execute a process" step run the PowerShell command.
This is what is returned in Pentaho vs. what PowerShell returns.
I'm expecting the results to be returned exactly as PowerShell returns them. I'm hoping I can accomplish this without exporting the data to another format. We have had nothing but issues with the JSON Input step in Pentaho, so we chose PowerShell over the "Modified Javascript Value" step in Pentaho.
Any idea how I can get this to return a result set (like a SQL query would) back to Pentaho?
Most likely your result set is returning the entire thing, just not "tabled" as you expected: it's probably returning the whole table summed up as one long text value, but still containing all the line breaks / column breaks.
Try using Split steps in your Pentaho flow to work on the returned string. First off, try a "Split field to rows" step with the delimiter set to "${line.separator}".
From there, all you have to do is keep splitting until it is a table in Pentaho.
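Alternatively, you may be able to make the splitting trivial by emitting delimited text from PowerShell itself. A sketch, reusing the GetFileInfo loop from the question with the built-in ConvertTo-Csv:

$psdata.building | ForEach-Object {
    foreach ($File in $_.files) {
        [PSCustomObject]@{
            BuildingName = $_.buildingname
            FileName     = $File.name
            fileId       = $File.fileId
            Status       = $File.status
        }
    }
} | ConvertTo-Csv -NoTypeInformation   # one CSV line per row: split on line breaks, then on commas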

How to append web request results in a while loop

I need to call an API and loop through the various pages of results that are returned and append them all to one object.
I've tried the code below. Generally += works when appending to a PowerShell object, but no luck this time.
Note: URI and Get are both functions that are defined elsewhere. They work as expected elsewhere in the code.
$min=1
$max=2
while ($min -le $max){
    $url = URI "tasks?page=$min"
    $x = Get $url
    if($min=1){
        $response=$x
    }
    else{
        $response+=$x
    }
    $min=$min+1
}
sample response (converted to JSON):
{
    "value": [
        {
            "task_id": 17709655,
            "project_id": 1928619,
            "start_date": "2019-04-11",
            "end_date": "2019-11-29",
            "start_time": null,
            "hours": 1.5,
            "people_id": 17083963,
            "status": 2,
            "priority": 0,
            "name": "",
            "notes": "",
            "repeat_state": 0,
            "repeat_end_date": null,
            "created_by": 331791,
            "modified_by": 0,
            "created": "2019-04-12 00:39:30.162",
            "modified": "2019-04-12 00:39:30.162",
            "ext_calendar_id": null,
            "ext_calendar_event_id": null,
            "ext_calendar_recur_id": null
        },
        {
            "task_id": 17697564,
            "project_id": 1928613,
            "start_date": "2019-10-08",
            "end_date": "2019-10-08",
            "start_time": null,
            "hours": 8,
            "people_id": 17083966,
            "status": 2,
            "priority": 0,
            "name": "",
            "notes": "",
            "repeat_state": 0,
            "repeat_end_date": null,
            "created_by": 327507,
            "modified_by": 0,
            "created": "2019-04-11 16:10:22.969",
            "modified": "2019-04-11 16:10:22.969",
            "ext_calendar_id": null,
            "ext_calendar_event_id": null,
            "ext_calendar_recur_id": null
        }
    ],
    "Count": 2
}
Assuming you want the output to be an array, I'd write your code like this:
$min = 1
$max = 2
$response = foreach ($Page in $min..$max) {
    $url = URI "tasks?page=$Page"
    Get $url
}
This is the generally preferred method: because strings and arrays have fixed lengths in .NET (and therefore in PowerShell), += has to allocate a new array and copy everything on each iteration, whereas assigning the output of foreach lets PowerShell collect the results for you.
Here, $response[0] should be the first response, $response[1] the second, etc.
If the above doesn't work for you, then my first guess would be that the output of Get isn't a string.
If you're expecting $response to be a single valid JSON string containing all the responses, then my response is "JSON doesn't work that way." You'll have to parse each JSON response to objects (hint: ConvertFrom-Json), combine them, and then possibly convert them back to JSON (ConvertTo-Json). Note that .NET's native dialect of JSON doesn't match the rest of the Internet's dialect, particularly with dates (though it looks like your dates here are strings). You may want to use JSON.Net, which I believe does match the common Internet dialect.
You may be able to combine $response like this:
$CombinedResponse = '[' + ($response -join ',') + ']'
But I don't know how well that's going to work if you then try to parse that as JSON.
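A sketch of that parse-then-combine approach, assuming URI and Get are the question's own helpers and that Get returns one JSON string per page shaped like the sample above. (Note, too, that the original loop's if($min=1) is an assignment, not a comparison; it should be if($min -eq 1). As written, each pass resets $min to 1, so the loop never ends and $response is overwritten instead of appended.)

$allTasks = foreach ($page in 1..2) {
    $json = Get (URI "tasks?page=$page")   # one JSON string per page
    ($json | ConvertFrom-Json).value       # emit the page's task objects; foreach collects them
}
# $allTasks is now a flat array of task objects from all pages;
# convert back to a single JSON document if needed:
$combined = $allTasks | ConvertTo-Json -Depth 5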

How to name the file created in blob storage from a Powershell Azure Function?

Here is the function.json I have. The default for the path property is {rand-guid}, which works.
I want to name the file. I tried {filename} in the JSON, and in run.ps1 I tried setting $env:filename and $filename; it did not work.
Can this be done?
{
    "bindings": [
        {
            "name": "myTimer",
            "type": "timerTrigger",
            "direction": "in",
            "schedule": "0 * 7 * * *"
        },
        {
            "type": "blob",
            "name": "outputBlob",
            "path": "outcontainer/{filename}",
            "connection": "testfunctionpsta0a5_STORAGE",
            "direction": "out"
        }
    ],
    "disabled": false
}
Unfortunately, the output binding is not that simple. We need to pass a JSON object to the output binding to get the values.
Have a look at the examples at https://learn.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings#advanced-binding-at-runtime-imperative-binding:
// function.json
{
    "name": "$return",
    "type": "blob",
    "direction": "out",
    "path": "output-container/{id}"
}

// C# example: use method return value for output binding
public static string Run(WorkItem input, TraceWriter log)
{
    string json = string.Format("{{ \"id\": \"{0}\" }}", input.Id);
    log.Info($"C# script processed queue message. Item={json}");
    return json;
}
It should be straightforward to do something similar in PowerShell.
Could you pass the name of your file as part of your trigger payload? If yes, this SO thread may help.
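If the binding-expression route doesn't pan out (a timer trigger has no payload to resolve {filename} from), one workaround is to skip the output binding entirely and write the blob from run.ps1 with the Az.Storage module. A sketch, where the account name, key variable, container, and file path are all assumed placeholders:

# Sketch: write a blob whose name is computed at runtime, bypassing the output binding.
$ctx = New-AzStorageContext -StorageAccountName 'mystorageacct' -StorageAccountKey $env:STORAGE_KEY
$blobName = "report-$(Get-Date -Format yyyyMMdd).txt"   # name computed in code
Set-AzStorageBlobContent -File $localFile -Container 'outcontainer' -Blob $blobName -Context $ctx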

Azure nested template deployment: Using template element (not templateLink) with PowerShell

In an attempt to make life easier (in the long run), I'm trying to use properties.template, as opposed to the well-documented properties.templateLink. The former has very little documentation; I'm passing the contents of the child.json template file into the parent.json template as a template parameter.
From the MS documentation for Microsoft.Resources/deployments:
The template content. You use this element when you want to pass the template syntax directly in the request rather than link to an existing template. It can be a JObject or well-formed JSON string. Use either the templateLink property or the template property, but not both.
In my parent template, I am declaring the parameter childTemplates and referencing it in properties.template:
"parameters": {
"childTemplates": {
"type": "object",
"metadata": {
"description": "Child template"
}
}
}
other stuff...
"resources": [
{
"name": "[concat('linkedTemplate-VM-Net-',copyIndex(1))]",
"type": "Microsoft.Resources/deployments",
"apiVersion": "2017-06-01",
"dependsOn": [],
"copy": {
"name": "interate",
"count": "[parameters('vmQty')]"
},
"properties": {
"mode": "Incremental",
"template": "[parameters('childTemplates')]",
"parameters": {
"sharedVariables": { "value": "[variables('sharedVariables')]" },
"sharedTemplate": { "value": "[variables('sharedTemplate')]" },
"artifactsLocationSasToken": { "value": "[parameters('artifactsLocationSasToken')]" },
"adminPassword": { "value": "[parameters('adminPassword')]" },
"copyIndexValue": { "value": "[copyIndex(1)]" }
},
"debugSetting": {
"detailLevel": "both"
}
}
}
],
I then pass the child template to New-AzureRmResourceGroupDeployment -TemplateParameterObject to deploy the parent template:
$TemplateFileLocation = "C:\Temp\templates\parent.json"
$JsonChildTemplate = Get-Content -Raw (Join-Path ($TemplateFileLocation | Split-Path -Parent) "nestedtemplates\child.json") | ConvertFrom-Json
$TemplateParameters = @{
    childTemplates = $JsonChildTemplate
    ...Other parameters...
}
New-AzureRmResourceGroupDeployment -TemplateParameterObject $TemplateParameters
This produces the following error:
Code : InvalidTemplate
Message : The nested deployment 'linkedTemplate-VM-Net-1' failed validation: 'Required property '$schema' not found in JSON. Path 'properties.template'.'.
Target :
Details :
If I look at $JsonChildTemplate, it gives me:
$schema        : https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#
contentVersion : 1.0.0.0
parameters     : @{sharedVariables=; sharedTemplate=; vhdStorageAccountName=; artifactsLocationSasToken=; adminPassword=; copyIndexValue=}
variables      : @{seqNo=[padleft(add(parameters('copyIndexValue'),3),3,'0')]; nicName=[concat('NIC-',parameters('sharedVariables').role,'-', variables('seqNo'),'-01')]; subnetRef=[parameters('sharedVariables').network.subnetRef]; ipConfigName=[concat('ipconfig-', variables('seqNo'))]}
resources      : {@{apiVersion=2016-03-30; type=Microsoft.Network/networkInterfaces; name=[variables('nicName')]; location=[resourceGroup().location]; tags=; dependsOn=System.Object[]; properties=}}
outputs        : @{nicObject=; vmPrivateIp=; vmNameSuffix=; vmPrivateIpArray=}
To me, it looks like the $schema is there.
I have also tried removing | ConvertFrom-Json with the same error.
Above, I am showing the latest API version, but I have tried with others such as 2016-09-01, just in case there's a bug.
In my search for a solution, I found this issue on GitHub. The recommendation is to remove $schema and contentVersion, although this flies in the face of the error. I tried this with the following:
Function Get-ChildTemplate
{
    $TemplateFileLocation = "C:\Temp\templates\nestedtemplates\child.json"
    $json = Get-Content -Raw -Path $TemplateFileLocation | ConvertFrom-Json
    $NewJson = @()
    $NewJson += $json.parameters
    $NewJson += $json.variables
    $NewJson += $json.resources
    $NewJson += $json.outputs
    Return $NewJson | ConvertTo-Json
}
$JsonChildTemplate = Get-ChildTemplate
$TemplateParameters = @{
    childTemplates = $JsonChildTemplate
    ...Other parameters...
}
$JsonChildTemplate returns:
[
    {
        "sharedVariables": {
            "type": "object",
            "metadata": "@{description=Object of variables from master template}"
        }...
My guess is that I have done something wrong in passing child.json's contents to New-AzureRmResourceGroupDeployment. That, or it's not actually possible to do what I'm trying to do.
P.S.
get-command New-AzureRmResourceGroupDeployment

CommandType     Name                                 Version    Source
-----------     ----                                 -------    ------
Cmdlet          New-AzureRmResourceGroupDeployment   4.1.0      AzureRM.Resources
First of all, what you are doing makes zero sense whatsoever; that being said, let's try to help you.
1. Try splatting, so do New-AzureRmResourceGroupDeployment ... @TemplateParameters instead of what you are doing (no idea why, but somehow it works better in my experience); see the splatting sketch after the minimal template below.
2. If that doesn't work directly, try simplifying your nested template to the bare minimum and see if it works; if it does, check whether your nested template is fine.
3. Try creating a deployment with the -Debug switch and see where that goes.
4. Try the same deployment using the Azure CLI (maybe it converts JSON to an input object in a proper way).
5. Skip items 1-4 and do it the proper way. I would advise never doing preprocessing\in-flight generation of ARM templates. They have enough features already to accomplish anything if you are smart\hacky enough. I have no idea what you are trying to achieve, but I can bet my life on it that you don't need the monstrosity you are trying to create.
small template example:
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "resources": []
}
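For item 1, a splatting call might look like this (a sketch only; the resource group name and template path are placeholders, and $TemplateParameters is the hashtable from the question):

# @TemplateParameters splats each hashtable entry as a dynamic parameter
# of the cmdlet, one per template parameter (childTemplates, etc.)
New-AzureRmResourceGroupDeployment -ResourceGroupName 'my-rg' `
    -TemplateFile 'C:\Temp\templates\parent.json' @TemplateParameters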
EDIT: I dug a bit more and found a solution. One way to do it would be using the json() function of the ARM template, which accepts a string and converts it to valid JSON.
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "inp": {
            "type": "string"
        }
    },
    "resources": [
        {
            "name": "NestedDeployment1",
            "type": "Microsoft.Resources/deployments",
            "apiVersion": "2015-01-01",
            "properties": {
                "mode": "Incremental",
                "template": "[json(parameters('inp'))]",
                "parameters": {}
            }
        }
    ]
}
To deploy, use something like this:
New-AzureRmResourceGroupDeployment ... -inp ((Get-Content path\to.json -Raw) -replace '\s','')
# we minify the string so it gets properly converted to JSON
This is a bit of a hack, but the problem lies with how PowerShell converts your input to what it passes to the template, and you cannot really control that.
Another way to do it (if you need output, you can add another parameter):
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "input-param": {
            "type": "object"
        },
        "input-resource": {
            "type": "array"
        }
    },
    "resources": [
        {
            "name": "NestedDeployment1",
            "type": "Microsoft.Resources/deployments",
            "apiVersion": "2015-01-01",
            "properties": {
                "mode": "Incremental",
                "template": {
                    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
                    "contentVersion": "1.0.0.0",
                    "parameters": "[parameters('input-param')]",
                    "resources": "[parameters('input-resource')]"
                },
                "parameters": {}
            }
        }
    ]
}
and deploying like so:
New-AzureRmResourceGroupDeployment -ResourceGroupName zzz -TemplateFile path\to.json -input-param @{...} -input-resource @(...)
P.S. Don't mind Walter; each time he says something can't be done or is impossible, it actually is possible.