Summary
I would like to use PowerShell to retrieve an image from the Windows clipboard, and save it to a file.
When I copy an image to the clipboard, from a web browser for example, I am able to retrieve a System.Windows.Interop.InteropBitmap object.
Once I have this object, I need to save it to disk.
Question: How can I accomplish this?
Add-Type -AssemblyName PresentationCore
$DataObject = [System.Windows.Clipboard]::GetDataObject()
$DataObject | Get-Member
All credit goes to this answer; this is merely a PowerShell adaptation of that code.
The code consists of two functions: one that outputs the InteropBitmap instance of the image on the clipboard, and one that receives this object from the pipeline and writes it to a file. The Set-ClipboardImage function can output using three different encoders; you can add more if needed (a sketch follows the example below). See the System.Windows.Media.Imaging Namespace for details.
These functions were tested on Windows PowerShell 5.1 and PowerShell Core; I do not know whether they work in earlier versions.
using namespace System.Windows
using namespace System.Windows.Media.Imaging
using namespace System.IO
using namespace System.Windows.Interop
Add-Type -AssemblyName PresentationCore
function Get-ClipboardImage {
    $image = [Clipboard]::GetDataObject().GetImage()
    # Wait for the bitmap source to finish downloading its content
    do {
        Start-Sleep -Milliseconds 200
        $downloading = $image.IsDownloading
    } while($downloading)
    $image
}
function Set-ClipboardImage {
    [cmdletbinding()]
    param(
        [Parameter(Mandatory)]
        [ValidateScript({
            if(Test-Path (Split-Path $_)) {
                return $true
            }
            throw [ArgumentException]::new(
                'Destination folder does not exist.', 'Destination'
            )
        })]
        [string] $Destination,

        [Parameter(Mandatory, ValueFromPipeline, DontShow)]
        [InteropBitmap] $InputObject,

        [Parameter()]
        [ValidateSet('Png', 'Bmp', 'Jpeg')]
        [string] $Encoding = 'Png'
    )

    end {
        try {
            $Destination = $PSCmdlet.GetUnresolvedProviderPathFromPSPath($Destination)
            # Pick the encoder matching the requested output format
            $encoder = switch($Encoding) {
                Png  { [PngBitmapEncoder]::new(); break }
                Bmp  { [BmpBitmapEncoder]::new(); break }
                Jpeg { [JpegBitmapEncoder]::new() }
            }
            $fs = [File]::Create($Destination)
            $encoder.Frames.Add([BitmapFrame]::Create($InputObject))
            $encoder.Save($fs)
        }
        catch {
            $PSCmdlet.WriteError($_)
        }
        finally {
            # Dispose the stream even on failure; the .ForEach intrinsic is null-safe here
            $fs.ForEach('Dispose')
        }
    }
}
Get-ClipboardImage | Set-ClipboardImage -Destination .\test.jpg -Encoding Jpeg
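For instance, to support TIFF you would add 'Tiff' to the ValidateSet and a corresponding TiffBitmapEncoder branch to the switch (TiffBitmapEncoder also lives in System.Windows.Media.Imaging). A minimal standalone sketch of the same encoder pattern, assuming an image is currently on the clipboard:

Add-Type -AssemblyName PresentationCore
$image = [System.Windows.Clipboard]::GetDataObject().GetImage()
# Same pattern as Set-ClipboardImage, with the TIFF encoder swapped in
$encoder = [System.Windows.Media.Imaging.TiffBitmapEncoder]::new()
$encoder.Frames.Add([System.Windows.Media.Imaging.BitmapFrame]::Create($image))
$fs = [System.IO.File]::Create("$PWD\test.tiff")
$encoder.Save($fs)
$fs.Dispose()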
I have this function I found online which updates an IIS web.config XML file based on values I am feeding it from CSV files. I would like to prevent the output of every XML entry modification from being printed when the function is executed. I have tried numerous times to utilize Out-Null, but I cannot seem to locate which portion of this XML update function is causing the output.
I would like to remove all of the XML update output shown below after the line "Setting web.config appSettings for D:\Inetpub\WWWRoot\test". The output seems to be the key/value pairs the function is reading from the CSV file and updating the web.config file with.
Setting web.config appSettings for D:\Inetpub\WWWRoot\test
#text
-----
allowFrame TRUE
authenticationType SSO
cacheProviderType Shared
Here is the function I am utilizing:
function Set-Webconfig-AppSettings
{
    param (
        # Physical path for the IIS Endpoint on the machine without the "web.config" part.
        # Example: 'D:\inetpub\wwwroot\cmweb510\'
        [parameter(mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [String] $path,

        # web.config key that you want to create or change
        [parameter(mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [String] $key,

        # Value of the key you want to create or change
        [parameter(mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [String] $value
    )

    Write-Host "Setting web.config appSettings for $path" -ForegroundColor DarkCyan

    $webconfig = Join-Path $path "web.config"
    [bool] $found = $false

    if (Test-Path $webconfig)
    {
        $xml = [xml](get-content $webconfig);
        $root = $xml.get_DocumentElement();

        foreach ($item in $root.appSettings.add)
        {
            if ($item.key -eq $key)
            {
                $item.value = $value;
                $found = $true;
            }
        }

        if (-not $found)
        {
            $newElement = $xml.CreateElement("add");
            $nameAtt1 = $xml.CreateAttribute("key")
            $nameAtt1.psbase.value = $key;
            $newElement.SetAttributeNode($nameAtt1);

            $nameAtt2 = $xml.CreateAttribute("value");
            $nameAtt2.psbase.value = $value;
            $newElement.SetAttributeNode($nameAtt2);

            $xml.configuration["appSettings"].AppendChild($newElement);
        }

        $xml.Save($webconfig)
    }
    else
    {
        Write-Error -Message "Error: File not found '$webconfig'"
    }
}
Both SetAttributeNode and AppendChild return output, so we just need to assign those calls to $null or cast them to [void]:
function Set-Webconfig-AppSettings
{
    param (
        # Physical path for the IIS Endpoint on the machine without the "web.config" part.
        # Example: 'D:\inetpub\wwwroot\cmweb510\'
        [parameter(mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [String] $path,

        # web.config key that you want to create or change
        [parameter(mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [String] $key,

        # Value of the key you want to create or change
        [parameter(mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [String] $value
    )

    Write-Host "Setting web.config appSettings for $path" -ForegroundColor DarkCyan

    $webconfig = Join-Path $path "web.config"
    [bool] $found = $false

    if (Test-Path $webconfig)
    {
        $xml = [xml](get-content $webconfig);
        $root = $xml.get_DocumentElement();

        foreach ($item in $root.appSettings.add)
        {
            if ($item.key -eq $key)
            {
                $item.value = $value;
                $found = $true;
            }
        }

        if (-not $found)
        {
            $newElement = $xml.CreateElement("add");
            $nameAtt1 = $xml.CreateAttribute("key")
            $nameAtt1.psbase.value = $key;
            $null = $newElement.SetAttributeNode($nameAtt1);

            $nameAtt2 = $xml.CreateAttribute("value");
            $nameAtt2.psbase.value = $value;
            $null = $newElement.SetAttributeNode($nameAtt2);

            $null = $xml.configuration["appSettings"].AppendChild($newElement);
        }

        $xml.Save($webconfig)
    }
    else
    {
        Write-Error -Message "Error: File not found '$webconfig'"
    }
}
In PowerShell, return values from .NET methods - in general, all output that is neither assigned to a variable, nor sent through the pipeline to another command, nor redirected to a file - are implicitly output ("returned") from functions, even without an explicit output command such as Write-Output or a return statement.
See the bottom section of this answer for the rationale behind this implicit-output feature.
It follows that, in PowerShell:
When you call .NET methods, you need to be aware of whether or not they return a value (technically, not returning a value is indicated by a return type of void).
If they do and if you don't need this return value, you must explicitly discard (suppress) it, so that it doesn't accidentally become part of the function's output.
In your case, it is the calls to the System.Xml.XmlElement.SetAttributeNode and System.Xml.XmlElement.AppendChild methods that cause your problem, because they have return values that you're not suppressing.
Doug Maurer's helpful answer shows how to discard these return values by assigning the method calls to $null ($null = ...).
While two other methods for suppressing output exist as well - casting to [void] and piping to Out-Null - $null = ... is the best overall choice, as discussed in this answer.
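To make this concrete, here is a small demonstration using System.Collections.ArrayList, whose Add method returns the index of the newly added element:

$list = [System.Collections.ArrayList]::new()
$list.Add('one')             # leaks: the returned index (0) becomes output
$null = $list.Add('two')     # suppressed - best overall choice
[void] $list.Add('three')    # suppressed via a [void] cast
$list.Add('four') | Out-Null # suppressed via the pipeline (generally the slowest)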
I am able to execute a PowerShell script on the machine, but not able to do it using the Jenkins PowerShell plugin.
My PowerShell script opens another program's UI (QlikView) and then closes it. This works when I execute the script directly on the machine, but when I do the same using the Jenkins PowerShell plugin it does not work; the execution runs on indefinitely.
[CmdletBinding()]
param (
    $FullQvwPath
)

function qv-SaveAndClose-QVW
{
    param(
        [Parameter(Mandatory=$true,ValueFromPipeline=$true)]
        $QvwPath
    )
    try {
        $qvComObject = new-object -comobject QlikTech.QlikView
        $NewCreatedDoc = $qvComObject.CreateDoc()
        $NewCreatedDoc.SaveAs($QvwPath)
        $NewCreatedDoc.CloseDoc()
        $qvComObject.Quit()
    }
    finally {
    }
}

qv-SaveAndClose-QVW -QvwPath $FullQvwPath
I have put the above code in a file, QlikSaveAndClose.ps1, which I invoke like this:
.\QlikSaveAndClose.ps1 -FullQvwPath 'C:\Program Files (x86)\Jenkins\Dashboard.qvw'
Could it be that the file already exists? In that case, SaveAs is prompting to overwrite the file, so remove it first. Also, place the Quit in a finally block so the COM object is always closed, even on errors. And while we're at it, only use approved Verb-Noun names for your cmdlets:
[CmdletBinding()]
param (
    [Parameter(Mandatory=$true)]
    [String] $FullQvwPath
)

function Save-QVW
{
    param (
        [Parameter(Mandatory=$true,ValueFromPipeline=$true)]
        [String] $Path
    )

    $qvComObject = New-Object -ComObject "QlikTech.QlikView"

    try
    {
        $newCreatedDoc = $qvComObject.CreateDoc()

        if (Test-Path -Path $Path)
        {
            Remove-Item -Path $Path -Force
        }

        $newCreatedDoc.SaveAs($Path)
        $newCreatedDoc.CloseDoc()
    }
    finally
    {
        $qvComObject.Quit()
    }
}

Save-QVW -Path $FullQvwPath
I have a few functions stored in a .psm1 file that is used by several different ps1 scripts. I created a logging function (shown below), which I use throughout these ps1 scripts. Typically, I import the module within a script by simply calling something like:
Import-Module $PSScriptRoot\Module_Name.psm1
Then, within the module, I have a Write-Logger function that I call like this:
Write-Logger -Message "Insert log message here." @logParams
This function is used throughout the main script and the module itself. The splatted parameters @logParams are defined in my main .ps1 file and are not explicitly passed to the module; I suppose the variables are implicitly in the module's scope once it is imported. What I have works, but I feel like it isn't great practice. Would it be better practice to add a param block within my module to require @logParams to be explicitly passed from the main .ps1 script? Thanks!
function Write-Logger() {
    [cmdletbinding()]
    Param (
        [Parameter(Mandatory=$true)]
        [string]$Path,

        [Parameter(Mandatory=$true)]
        [string]$Message,

        [Parameter(Mandatory=$false)]
        [string]$FileName = "Scheduled_IDX_Backup_Transcript",

        [switch]$Warning,
        [switch]$Error
    )

    # Creates new log directory if it does not exist
    if (-Not (Test-Path ($path))) {
        New-Item ($path) -type directory | Out-Null
    }

    if ($error) {
        $label = "Error"
    }
    elseif ($warning) {
        $label = "Warning"
    }
    else {
        $label = "Normal"
    }

    # Mutex allows for writing to single log file from multiple runspaces
    $mutex = new-object System.Threading.Mutex $false,'MutexTest'
    [void]$mutex.WaitOne()
    Write-Host "$(Format-LogTimeStamp) $label`: $message"
    "$(Format-LogTimeStamp) $label`: $message" | Out-file "$path\$fileName.log" -encoding UTF8 -append
    [void]$mutex.ReleaseMutex()
}
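For reference, a hypothetical definition of the splat in the main .ps1 file might look like this (the values are illustrative, not taken from the original script):

# Hypothetical splat defined in the main .ps1 script
$logParams = @{
    Path     = 'C:\Logs\IDX'
    FileName = 'Scheduled_IDX_Backup_Transcript'
}
Write-Logger -Message "Insert log message here." @logParams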
I have this code in a .ps1 file that I dot-source into scripts where I want to produce my own logs. The .ps1 contains the simpleLogger class as well as the routine below that creates a global variable. The script can be dot-sourced again and the global variable value passed to subsequently spawned jobs to maintain a single log file.
class simpleLogger
{
    [string]$source
    [string]$target
    [string]$action
    [datetime]$datetime
    hidden [string]$logFile = $global:simpleLog

    simpleLogger()
    {
        $this.datetime = Get-Date
    }

    simpleLogger( [string]$source, [string]$target, [string]$action )
    {
        $this.action = $action
        $this.source = $source
        $this.target = $target
        $this.datetime = Get-Date
    }

    static [simpleLogger] log( [string]$source, [string]$target, [string]$action )
    {
        $newLogger = [simpleLogger]::new( [string]$source, [string]$target, [string]$action )
        do {
            $done = $true
            try {
                $newLogger | export-csv -Path $global:simpleLog -Append -NoTypeInformation
            }
            catch {
                $done = $false
                start-sleep -milliseconds $(get-random -min 1000 -max 10000)
            }
        } until ( $done )
        return $newLogger
    }
}

if( -not $LogSession ){
    $global:logUser = $env:USERNAME
    $global:logDir = $env:TEMP + "\"
    $startLog = (get-date).tostring("MMddyyyyHHmmss")
    $global:LogSessionGuid = (New-Guid)
    $global:simpleLog = $script:logDir+$script:logUser+"-"+$LogSessionGuid+".log"
    [simpleLogger]::new() | export-csv -Path $script:simpleLog -NoTypeInformation
    $global:LogSession = [simpleLogger]::log( $script:logUser, $LogSessionGuid, 'Log init' )
}
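A usage sketch, assuming the code above is saved as simpleLogger.ps1 (the file name and target path are illustrative):

. .\simpleLogger.ps1      # first dot-source: creates the log file and $LogSession
[simpleLogger]::log($env:USERNAME, 'D:\data\report.csv', 'Exported report')
. .\simpleLogger.ps1      # dot-sourcing again is a no-op; $LogSession already exists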
I have a pipeline function that allocates some resources in its begin block that need to be disposed of at the end. I've tried doing it in the end block, but it's not called when function execution is aborted, for example by Ctrl+C.
How would I modify the following code to ensure that $sw is always disposed:
function Out-UnixFile([string] $Path, [switch] $Append) {
    <#
    .SYNOPSIS
    Sends output to a file encoded with UTF-8 without BOM with Unix line endings.
    #>
    begin {
        $encoding = new-object System.Text.UTF8Encoding($false)
        $sw = new-object System.IO.StreamWriter($Path, $Append, $encoding)
        $sw.NewLine = "`n"
    }

    process { $sw.WriteLine($_) }

    # FIXME not called on Ctrl+C
    end { $sw.Close() }
}
EDIT: simplified function
Unfortunately, there is no good solution for this. Deterministic cleanup seems to be a glaring omission in PowerShell. It could be as simple as introducing a new cleanup block that is always called regardless of how the pipeline ends, but alas, even version 5 seems to offer nothing new here (it introduces classes, but without cleanup mechanics).
That said, there are some not-so-good solutions. The simplest: if you enumerate over the $input variable rather than using begin/process/end, you can use try/finally:
function Out-UnixFile([string] $Path, [switch] $Append) {
    <#
    .SYNOPSIS
    Sends output to a file encoded with UTF-8 without BOM with Unix line endings.
    #>
    $encoding = new-object System.Text.UTF8Encoding($false)
    $sw = $null
    try {
        $sw = new-object System.IO.StreamWriter($Path, $Append, $encoding)
        $sw.NewLine = "`n"
        foreach ($line in $input) {
            $sw.WriteLine($line)
        }
    } finally {
        if ($sw) { $sw.Close() }
    }
}
This has the big drawback that your function will hold up the entire pipeline until everything is available (basically the whole function is treated as a big end block), which is obviously a deal breaker if your function is intended to process lots of input.
The second approach is to stick with begin/process/end and manually process Control-C as input, since this is really the problematic bit. But by no means the only problematic bit, because you also want to handle exceptions in this case -- end is basically useless for purposes of cleanup, since it is only invoked if the entire pipeline is successfully processed. This requires an unholy mix of trap, try/finally and flags:
function Out-UnixFile([string] $Path, [switch] $Append) {
    <#
    .SYNOPSIS
    Sends output to a file encoded with UTF-8 without BOM with Unix line endings.
    #>
    begin {
        $old_treatcontrolcasinput = [console]::TreatControlCAsInput
        [console]::TreatControlCAsInput = $true
        $encoding = new-object System.Text.UTF8Encoding($false)
        $sw = new-object System.IO.StreamWriter($Path, $Append, $encoding)
        $sw.NewLine = "`n"
        $end = {
            [console]::TreatControlCAsInput = $old_treatcontrolcasinput
            $sw.Close()
        }
    }

    process {
        trap {
            &$end
            break
        }
        try {
            if ($break) { break }
            $sw.WriteLine($_)
        } finally {
            if ([console]::KeyAvailable) {
                $key = [console]::ReadKey($true)
                if (
                    $key.Modifiers -band [consolemodifiers]"control" -and
                    $key.key -eq "c"
                ) {
                    $break = $true
                }
            }
        }
    }

    end {
        &$end
    }
}
Verbose as it is, this is the shortest "correct" solution I can come up with. It does go through contortions to ensure the Control-C status is restored properly and we never attempt to catch an exception (because PowerShell is bad at rethrowing them); the solution could be slightly simpler if we didn't care about such niceties. I'm not even going to try to make a statement about performance. :-)
If someone has ideas on how to improve this, I'm all ears. Obviously checking for Control-C could be factored out to a function, but beyond that it seems hard to make it simpler (or at least more readable) because we're forced to use the begin/process/end mold.
It's possible to write it in C#, where one can implement IDisposable - confirmed to be called by PowerShell in case of Ctrl+C.
I'll leave the question open in case someone comes up with some way of doing it in PowerShell.
using System;
using System.IO;
using System.Management.Automation;
using System.Management.Automation.Internal;
using System.Text;

namespace MarcWi.PowerShell
{
    [Cmdlet(VerbsData.Out, "UnixFile")]
    public class OutUnixFileCommand : PSCmdlet, IDisposable
    {
        [Parameter(Mandatory = true, Position = 0)]
        public string FileName { get; set; }

        [Parameter(ValueFromPipeline = true)]
        public PSObject InputObject { get; set; }

        [Parameter]
        public SwitchParameter Append { get; set; }

        public OutUnixFileCommand()
        {
            InputObject = AutomationNull.Value;
        }

        public void Dispose()
        {
            if (sw != null)
            {
                sw.Close();
                sw = null;
            }
        }

        private StreamWriter sw;

        protected override void BeginProcessing()
        {
            base.BeginProcessing();
            var encoding = new UTF8Encoding(false);
            sw = new StreamWriter(FileName, Append, encoding);
            sw.NewLine = "\n";
        }

        protected override void ProcessRecord()
        {
            sw.WriteLine(InputObject);
        }

        protected override void EndProcessing()
        {
            base.EndProcessing();
            Dispose();
        }
    }
}
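For completeness, a sketch of compiling and loading the cmdlet from a session (the source file name is an assumption; any other way of building the assembly works just as well):

# Compile the C# above (assumed saved as OutUnixFileCommand.cs) against the PowerShell assembly
Add-Type -Path .\OutUnixFileCommand.cs -ReferencedAssemblies ([PSObject].Assembly.Location)
# Load the compiled cmdlet into the current session and try it out
Import-Module -Assembly ([MarcWi.PowerShell.OutUnixFileCommand].Assembly)
'line1', 'line2' | Out-UnixFile out.txt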
The following is an implementation of "using" for PowerShell (from Solutionizing .Net). using is a reserved word in PowerShell, hence the alias PSUsing:
function Using-Object {
    param (
        [Parameter(Mandatory = $true)]
        [Object]
        $inputObject = $(throw "The parameter -inputObject is required."),

        [Parameter(Mandatory = $true)]
        [ScriptBlock]
        $scriptBlock
    )

    if ($inputObject -is [string]) {
        if (Test-Path $inputObject) {
            [system.reflection.assembly]::LoadFrom($inputObject)
        } elseif($null -ne (
            new-object System.Reflection.AssemblyName($inputObject)
        ).GetPublicKeyToken()) {
            [system.reflection.assembly]::Load($inputObject)
        } else {
            [system.reflection.assembly]::LoadWithPartialName($inputObject)
        }
    } elseif ($inputObject -is [System.IDisposable] -and $scriptBlock -ne $null) {
        Try {
            &$scriptBlock
        } Finally {
            if ($inputObject -ne $null) {
                $inputObject.Dispose()
            }
            Get-Variable -scope script |
                Where-Object {
                    [object]::ReferenceEquals($_.Value.PSBase, $inputObject.PSBase)
                } |
                Foreach-Object {
                    Remove-Variable $_.Name -scope script
                }
        }
    } else {
        $inputObject
    }
}

New-Alias -Name PSUsing -Value Using-Object
With example usage:
psusing ($stream = new-object System.IO.StreamReader $PSHOME\types.ps1xml) {
    foreach ($_ in 1..5) { $stream.ReadLine() }
}
Obviously this is really just some packaging around Jeroen's first answer but may be useful for others who find their way here.
Is it possible in PowerShell to dot-source or otherwise re-use a script's functions without the script itself being executed? I'm trying to reuse the functions of a script without executing the script itself. I could factor the functions out into a functions-only file, but I'm trying to avoid doing that.
Example dot-sourced file:
function doA
{
    Write-Host "DoAMethod"
}
Write-Host "reuseme.ps1 main."
Example consuming file:
. ".\reuseme.ps1"
Write-Host "consume.ps1 main."
doA
Execution results:
reuseme.ps1 main.
consume.ps1 main.
DoAMethod
Desired result:
consume.ps1 main.
DoAMethod
You have to execute the function definitions to make them available. There is no way around it.
You could try throwing the PowerShell parser at the file and only executing function definitions and nothing else, but I guess the far easier approach would be to structure your reusable portions as modules or simply as scripts that don't do anything besides declaring functions.
For the record, a rough test script that would do exactly that:
$file = 'foo.ps1'
$tokens = @()
$errors = @()
$result = [System.Management.Automation.Language.Parser]::ParseFile($file, [ref]$tokens, [ref]$errors)

$tokens | %{$s=''; $braces = 0}{
    if ($_.TokenFlags -eq 'Keyword' -and $_.Kind -eq 'Function') {
        $inFunction = $true
    }
    if ($inFunction) { $s += $_.Text + ' ' }
    if ($_.TokenFlags -eq 'ParseModeInvariant' -and $_.Kind -eq 'LCurly') {
        $braces++
    }
    if ($_.TokenFlags -eq 'ParseModeInvariant' -and $_.Kind -eq 'RCurly') {
        $braces--
        if ($braces -eq 0) {
            $inFunction = $false;
        }
    }
    if (!$inFunction -and $s -ne '') {
        $s
        $s = ''
    }
} | iex
You will have problems if functions declared in the script reference script parameters (as the parameter block of the script isn't included). And there are probably a whole host of other problems that can occur that I cannot think of right now. My best advice is still to distinguish between reusable library scripts and scripts intended to be invoked.
After your function, the line Write-Host "reuseme.ps1 main." is known as "procedure code" (i.e., it is not within the function). You can tell the script not to run this procedure code by wrapping it in an IF statement that evaluates $MyInvocation.InvocationName -ne "."
$MyInvocation.InvocationName looks at how the script was invoked and if you used the dot (.) to dot-source the script, it will ignore the procedure code. If you run/invoke the script without the dot (.) then it will execute the procedure code. Example below:
function doA
{
    Write-Host "DoAMethod"
}

If ($MyInvocation.InvocationName -ne ".")
{
    Write-Host "reuseme.ps1 main."
}
Thus, when you run the script normally, you will see the output. When you dot-source the script, you will NOT see the output; however, the function (but not the procedure code) will be added to the current scope.
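A quick sanity check of both invocation styles (expected output shown as comments):

.\reuseme.ps1     # runs the script: prints "reuseme.ps1 main."
. .\reuseme.ps1   # dot-sourced: prints nothing, but doA is now defined
doA               # prints "DoAMethod"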
The best way to re-use code is to put your functions in a PowerShell module. Simply create a file with all your functions and give it a .psm1 extension. You then import the module to make all your functions available. For example, reuseme.psm1:
function doA
{
    Write-Host "DoAMethod"
}
Write-Host "reuseme.ps1 main."
Then, in whatever script you want to use your module of functions,
# If you're using PowerShell 2, you have to set $PSScriptRoot yourself:
# $PSScriptRoot = Split-Path -Parent -Path $MyInvocation.MyCommand.Definition
Import-Module -Name (Join-Path $PSScriptRoot reuseme.psm1 -Resolve)
doA
While looking a bit further for solutions for this issue, I came across a solution which is pretty much a followup to the hints in Aaron's answer. The intention is a bit different, but it can be used to achieve the same result.
This is what I found:
https://virtualengine.co.uk/2015/testing-private-functions-with-pester/
I needed it for some testing with Pester where I wanted to avoid changing the structure of the file before having written any tests for the logic.
It works quite well, and gives me the confidence to write some tests for the logic first, before refactoring the structure of the files so I no longer have to dot-source the functions.
Describe "SomeFunction" {
# Import the ‘SomeFunction’ function into the current scope
. (Get-FunctionDefinition –Path $scriptPath –Function SomeFunction)
It "executes the function without executing the script" {
SomeFunction | Should Be "fooBar"
}
}
And the code for Get-FunctionDefinition
#Requires -Version 3
<#
.SYNOPSIS
    Retrieves a function's definition from a .ps1 file or ScriptBlock.
.DESCRIPTION
    Returns a function's source definition as a Powershell ScriptBlock from an
    external .ps1 file or existing ScriptBlock. This module is primarily
    intended to be used to test private/nested/internal functions with Pester
    by dot-sourcing the internal function into Pester's scope.
.PARAMETER Function
    The source function's name to return as a [ScriptBlock].
.PARAMETER Path
    Path to a Powershell script file that contains the source function's
    definition.
.PARAMETER LiteralPath
    Literal path to a Powershell script file that contains the source
    function's definition.
.PARAMETER ScriptBlock
    A Powershell [ScriptBlock] that contains the function's definition.
.EXAMPLE
    If the following functions are defined in a file named 'PrivateFunction.ps1'

    function PublicFunction {
        param ()
        function PrivateFunction {
            param ()
            Write-Output 'InnerPrivate'
        }
        Write-Output (PrivateFunction)
    }

    The 'PrivateFunction' function can be tested with Pester by dot-sourcing
    the required function in either the 'Describe', 'Context' or 'It'
    scopes.

    Describe "PrivateFunction" {
        It "tests private function" {
            ## Import the 'PrivateFunction' definition into the current scope.
            . (Get-FunctionDefinition -Path "$here\$sut" -Function PrivateFunction)
            PrivateFunction | Should BeExactly 'InnerPrivate'
        }
    }
.LINK
    https://virtualengine.co.uk/2015/testing-private-functions-with-pester/
#>
function Get-FunctionDefinition {
    [CmdletBinding(DefaultParameterSetName='Path')]
    [OutputType([System.Management.Automation.ScriptBlock])]
    param (
        [Parameter(Position = 0,
            ValueFromPipeline = $true,
            ValueFromPipelineByPropertyName = $true,
            ParameterSetName='Path')]
        [ValidateNotNullOrEmpty()]
        [Alias('PSPath','FullName')]
        [System.String] $Path = (Get-Location -PSProvider FileSystem),

        [Parameter(Position = 0,
            ValueFromPipelineByPropertyName = $true,
            ParameterSetName = 'LiteralPath')]
        [ValidateNotNullOrEmpty()]
        [System.String] $LiteralPath,

        [Parameter(Position = 0,
            ValueFromPipeline = $true,
            ParameterSetName = 'ScriptBlock')]
        [ValidateNotNullOrEmpty()]
        [System.Management.Automation.ScriptBlock] $ScriptBlock,

        [Parameter(Mandatory = $true,
            Position = 1,
            ValueFromPipelineByPropertyName = $true)]
        [Alias('Name')]
        [System.String] $Function
    )

    begin {
        if ($PSCmdlet.ParameterSetName -eq 'Path') {
            $Path = $ExecutionContext.SessionState.Path.GetUnresolvedProviderPathFromPSPath($Path);
        }
        elseif ($PSCmdlet.ParameterSetName -eq 'LiteralPath') {
            ## Set $Path reference to the literal path(s)
            $Path = $LiteralPath;
        }
    } # end begin

    process {
        $errors = @();
        $tokens = @();
        if ($PSCmdlet.ParameterSetName -eq 'ScriptBlock') {
            $ast = [System.Management.Automation.Language.Parser]::ParseInput($ScriptBlock.ToString(), [ref] $tokens, [ref] $errors);
        }
        else {
            $ast = [System.Management.Automation.Language.Parser]::ParseFile($Path, [ref] $tokens, [ref] $errors);
        }

        [System.Boolean] $isFunctionFound = $false;
        $functions = $ast.FindAll({ $args[0] -is [System.Management.Automation.Language.FunctionDefinitionAst] }, $true);
        foreach ($f in $functions) {
            if ($f.Name -eq $Function) {
                Write-Output ([System.Management.Automation.ScriptBlock]::Create($f.Extent.Text));
                $isFunctionFound = $true;
            }
        } # end foreach function

        if (-not $isFunctionFound) {
            if ($PSCmdlet.ParameterSetName -eq 'ScriptBlock') {
                $errorMessage = 'Function "{0}" not defined in script block.' -f $Function;
            }
            else {
                $errorMessage = 'Function "{0}" not defined in "{1}".' -f $Function, $Path;
            }
            Write-Error -Message $errorMessage;
        }
    } # end process
} # end function Get-FunctionDefinition