I've obtained a file path to an XML resource by interrogating Task Scheduler arguments.
I'd like to pipe these file paths to [xml] to return data using XPath.
Online I see accelerators and variables being used, e.g.
$xml = [XML](Get-Content .\Test.xml)
I tried piping to ConvertTo-Xml, but that returns an XML object containing the file path, so I need to convert the file's content to [xml] - hoping to do this in the pipeline, potentially for more than one XmlDocument.
Is it possible to pipe to a type accelerator like [xml]?
Should I be piping to New-Object, or Tee-Object, as required?
I hope to eventually be able to construct a one-liner to interrogate several nodes (eg LastRan, LastResult)
Currently I have this, which only works for one node:
([xml](Get-Content ((Get-ScheduledTask -TaskPath *mytask* | select -First 1).Actions.Arguments | % {$_.Split('"')[-2]}))).MyDocument.LastRan
This returns the value of LastRan from the MyDocument node.
Thanks in advance 👍
If you want to take pipeline input, you need to write a function and set the ValueFromPipeline parameter attribute:
Function Convert-XML {
    Param(
        [Parameter(ValueFromPipeline)]$xml
    )
    process {
        [xml]$xml
    }
}
Then you can take the content of an XML file (all at once, not line by line):
Get-Content .\Test.xml -Raw | Convert-XML
Of course, to get your one-liner you'd probably want to add that logic to the function itself; however, this is how you'd handle pipeline input.
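To show where this leads, here's a hedged sketch that extends the idea: a function that accepts file paths from the pipeline, loads each file as [xml], and emits the nodes mentioned in the question (the function name Get-XmlStatus is an assumption, and the MyDocument/LastRan/LastResult node names are taken from the question's example):

```powershell
Function Get-XmlStatus {
    Param(
        # Accept one or more file paths from the pipeline
        [Parameter(ValueFromPipeline)][string]$Path
    )
    process {
        # Load the file content as an XmlDocument
        $doc = [xml](Get-Content -Path $Path -Raw)
        # Emit one object per document with the nodes of interest
        [pscustomobject]@{
            LastRan    = $doc.MyDocument.LastRan
            LastResult = $doc.MyDocument.LastResult
        }
    }
}

# Usage: works for more than one document
# '.\Test.xml', '.\Other.xml' | Get-XmlStatus
```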
Related
I have a Commands.csv file like:
| Command                                          |
| ------------------------------------------------ |
| (Get-FileHash C:\Users\UserA\Desktop\File1).Hash |
| (Get-FileHash C:\Users\UserA\Desktop\File2).Hash |
| (Get-FileHash C:\Users\UserA\Desktop\File3).Hash |
Header name is "Command"
My idea is to:
Use ForEach ($line in Get-Content C:\Users\UserA\Desktop\Commands.csv) { echo $line } to read the commands line by line.
Execute each $line one by one via powershell.exe, then output the results to a new .csv file, "result.csv".
Can you give me some directions and suggestions to implement this idea? Thanks!
Important:
Only use the technique below with input files you either fully control or implicitly trust to not contain malicious commands.
To execute arbitrary PowerShell statements stored in strings, you can use Invoke-Expression, but note that it should typically be avoided, as there are usually better alternatives - see this answer.
There are advanced techniques that let you analyze the statements before executing them and/or let you use a separate runspace with a restrictive language mode that limits what kinds of statements are allowed to execute, but that is beyond the scope of this answer.
Given that your input file is a .csv file with a Command column, import it with Import-Csv and access the .Command property on the resulting objects.
Use Get-Content only if your input file is a plain-text file without a header row, in which case the extension should really be .txt. (If it has a header row but there's only one column, you could get away with Get-Content Commands.csv | Select-Object -Skip 1 | ...) If that is the case, use $_ instead of $_.Command below.
To also use the CSV format for the output file, all commands must produce objects of the same type or at least with the same set of properties. The sample commands in your question output strings (the value of the .Hash property), which cannot meaningfully be passed to Export-Csv directly, so a [pscustomobject] wrapper with a Result property is used, which will result in a CSV file with a single column named Result.
Import-Csv Commands.csv |
  ForEach-Object {
    [pscustomobject] @{
      # !! SEE CAVEAT AT THE TOP.
      Result = Invoke-Expression $_.Command
    }
  } |
  Export-Csv -NoTypeInformation Results.csv
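If you instead had the header-less plain-text variant described above, the same pipeline would look like this (a sketch; the Commands.txt file name is illustrative):

```powershell
# Header-less plain-text input: one command per line
Get-Content Commands.txt |
  ForEach-Object {
    [pscustomobject] @{
      # !! SEE CAVEAT AT THE TOP.
      Result = Invoke-Expression $_
    }
  } |
  Export-Csv -NoTypeInformation Results.csv
```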
So I have updated my PowerShell (downloaded the new one from the MS Store) to version 7.2.4.0.
Then I wanted to convert a Markdown file to HTML, so I imported the two modules based on this description (https://social.technet.microsoft.com/wiki/contents/articles/30591.convert-markdown-to-html-using-powershell.aspx), doing something like:
Import-Module C:\Users\user\Documents\WindowsPowerShell\Modules\powershellMarkdown.dll
Import-Module C:\Users\mitus\Documents\WindowsPowerShell\Modules\MarkdownSharp.dll
Now I want to:
$md = ConvertFrom-Markdown C:\test\test.md
It results with:
ConvertFrom-Markdown: The given key 'test.md' was not present in the dictionary.
So I try the following:
$md = ConvertFrom-Markdown -Path .\test.md
And PowerShell now says:
ConvertFrom-Markdown: A parameter cannot be found that matches parameter name 'Path'.
Either way, it's not working. Why does PowerShell not recognize the -Path parameter? How do I get the conversion from Markdown to HTML working, even after importing those two .dll files? What am I doing wrong?
Thank you for your help!
The ConvertFrom-* commands generally don't support file input directly (with exceptions). When there is no related Import-* command, you have to use Get-Content to first read the file and then pass it to the ConvertFrom-* command like so:
$md = Get-Content C:\test\test.md -Raw | ConvertFrom-Markdown
Make sure to use the -Raw parameter so the ConvertFrom-Markdown command receives a single input string instead of an array of lines, which could be parsed incorrectly.
If you want to inspect the content of the .md file first, you may store it in a separate variable:
$text = Get-Content C:\test\test.md -Raw
# Now you may optionally log the content of the .md file
Write-Host $text
$md = $text | ConvertFrom-Markdown
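ConvertFrom-Markdown returns a MarkdownInfo object rather than an HTML string; to finish the Markdown-to-HTML conversion asked about, read its Html property (the output path here is illustrative):

```powershell
$md = Get-Content C:\test\test.md -Raw | ConvertFrom-Markdown
# MarkdownInfo exposes the rendered HTML as a string property
$md.Html | Out-File C:\test\test.html
```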
I want to chain cmdlets together in a pipeline, starting with Import-Csv reading a file of NN records to process; at the end I want to write out a result file for all NN records along with the accumulated results of processing. Along the way I may want to add data to the pipeline that says "don't process this record any further", but still pass the record along to the next stage.
I envisioned this looking like this:
Import-Csv input | step-1 -env DEV | step-2 | step-3 | Export-Csv result
Each cmdlet being written to pipe all $_ properties for each record and keep them in the pipeline.
What's the best way to read some sort of "CanContinue" property and if it is false short circuit processing and just pass it along to the next cmdlet in the pipeline without processing?
I'm assuming you don't want this flag to be part of the resulting CSV. The way I see it you can use 2 similar approaches: add the flag to the object being processed before returning it, or wrap the object in another object (which contains the flag and another property that holds the original object).
For now I'm going to explore the first option, where you add the property. I'm going to change it from a positive CanContinue to a negative DoNotContinue, so that its non-existence can coalesce to $false (do continue).
To do this, in each of your processing functions, just check the value of the DoNotContinue property. If it's $true, return the original object you received without additional processing.
If it's $false, you can do your processing and if the conditions are met that processing should stop, you force add the property with $true:
Process {
    # processing done
    $MyObj |
        Add-Member -NotePropertyName DoNotContinue -NotePropertyValue $true -Force -PassThru
}
All such commands can handle it this way.
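A minimal sketch of what one such processing function might look like (the function name step-2 comes from the question's pipeline; $someFailureCondition is a placeholder for your own logic):

```powershell
Function step-2 {
    Param([Parameter(ValueFromPipeline)]$MyObj)
    process {
        # Short-circuit: pass flagged objects through untouched
        if ($MyObj.DoNotContinue) { return $MyObj }

        # ... normal processing for this step goes here ...

        if ($someFailureCondition) {
            # Flag the record so downstream steps skip it
            $MyObj | Add-Member -NotePropertyName DoNotContinue `
                               -NotePropertyValue $true -Force -PassThru
        } else {
            $MyObj
        }
    }
}
```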
Now when it comes to the end of the pipeline, you don't want this property written to the CSV. For that, strip it off with Select-Object:
Import-Csv input |
step-1 -env DEV |
step-2 |
step-3 |
Select-Object -Property * -ExcludeProperty DoNotContinue |
Export-Csv result
Bonus:
Refer back to my answer on another question of yours, and instead of manually checking for the property, define it as a parameter in your processing cmdlets with [Parameter(ValueFromPipelineByPropertyName)] like so:
param(
[Parameter(ValueFromPipelineByPropertyName)]
[Switch]
$DoNotContinue
)
Why do that? Because you let PowerShell process the object for you and you only have to check the value of $DoNotContinue. It also allows you to override that value for a particular call.
(in this case, I'd rename it to $DoNotProcess or $SkipProcessing or something; remember you can also use [Alias()] if you want it to have multiple names)
I have a PowerShell script for which I expect to pass quite a few arguments on the command line. Having many arguments is not a problem, since it is configured as a scheduled task, but I'd like to make things easier for the support people, so that if they ever need to run the script from the command line they have less to type.
So I am considering the option of putting the arguments in a text file, either a Java-style properties file with a key/value pair on each line, or a simple XML file with one element per argument, each holding a name and a value.
arg1=value1
arg2=value2
I'm interested in the views of PowerShell experts on the two options, and also whether this is the right thing to do.
Thanks
If you want to use the ini file approach, you can easily parse the ini data like so:
$ini = ConvertFrom-StringData (Get-Content .\args.ini -Raw)
$ini.arg1
The one downside to this approach is that all the arg values are strings. This works fine for strings and even numbers, AFAICT. Where it falls down is with [switch] args: passing True or $true doesn't have the desired effect.
Another approach is to put the args in a .ps1 file and execute it to get back a hashtable with all the args, which you can then splat, e.g.:
-- args.ps1 --
@{
    ComputerName = 'localhost'
    Name = 'foo'
    ThrottleLimit = 50
    EnableNetworkAccess = $true
}

$cmdArgs = .\args.ps1
$s = New-PSSession @cmdArgs
To extend Keith's answer slightly, it's possible to set actual variables based on the values from the config file e.g.
$ini = ConvertFrom-StringData (Get-Content .\args.ini -Raw)
$ini.Keys | ForEach-Object { Set-Variable -Name $_ -Value $ini.Item($_) }
So a file containing
a=1
b=2
c=3
When processed with the above code, this should lead to three variables being created: $a = 1, $b = 2, $c = 3.
I expect this would work for simple string values, but I'm fairly sure it would not be 'smart' enough to convert, say, myArray = 1,2,3,4,5 into an actual array.
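For example (a sketch; the myArray key is taken from the sentence above), the value comes back as a single string, so a manual split and cast would be needed:

```powershell
# args.ini contains the line: myArray=1,2,3,4,5
$ini = ConvertFrom-StringData (Get-Content .\args.ini -Raw)
$ini['myArray'].GetType().Name          # String, not Object[]
# Convert manually if an actual array is needed:
$myArray = $ini['myArray'] -split ',' | ForEach-Object { [int]$_ }
```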
I have a script that writes output to a log file and also to the console. I am running the command Add-WindowsFeature... I want to take the output of this command and pipe it to my script. Is that possible?
Absolutely. You just need to include the CmdletBinding attribute on your param statement. Then, add an attribute to one of your parameters which details the way the pipeline input binds to the parameter. For instance put this in c:\temp\get-extension.ps1:
[CmdletBinding()]
Param(
    [parameter(Mandatory=$true,
               ValueFromPipeline=$true)]
    [System.IO.FileInfo[]]$file
)
process {
    $file.Extension
}
Then, you can do this:
dir -File | C:\temp\get-extension.ps1
Updating to address the latest comment: I'm guessing that setting the parameter type to [object[]]$stuff rather than [fileinfo[]], and putting
$stuff | out-file c:\logs\logfile.txt #or wherever you want
in the process block will get you close.
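Putting that together for the logging scenario in the question, here's a hedged sketch of such a script (the file paths are illustrative; Tee-Object writes each object to the log file while passing it through to the console):

```powershell
# Save as c:\temp\write-log.ps1 (illustrative path)
[CmdletBinding()]
Param(
    [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
    [object[]]$stuff
)
process {
    # Append each pipeline object to the log and re-emit it
    $stuff | Tee-Object -FilePath c:\logs\logfile.txt -Append
}
```

Usage would then be along the lines of `Add-WindowsFeature Web-Server | C:\temp\write-log.ps1`.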