I use the following command to run a pipeline.
.\Find-CalRatioSamples.ps1 data16 `
| ? {-Not (Test-GRIDDataset -JobName DiVertAnalysis -JobVersion 13 -JobSourceDatasetName $_ -Exists -Location UWTeV-linux)}
The first is a custom script of mine, and it runs very fast (milliseconds). The second is a custom command, also written by me (see https://github.com/LHCAtlas/AtlasSSH/blob/master/PSAtlasDatasetCommands/TestGRIDDataset.cs). It is very slow.
Actually, it isn't so slow at processing each line of input; it's the setup before the first line can be processed that is very expensive. Once that's done, it goes quite quickly. So all the expensive code gets executed once, and only the fairly fast code needs to run for each new pipeline input.
Unfortunately, when I use the ? { } construct above, it seems like PowerShell doesn't keep the pipeline flowing as it did before. It now calls my command afresh for each line of input, causing the command to redo all the setup every time.
Is there something I can change in how I invoke the pipeline, or in how I've coded my cmdlet, to prevent this from happening? Or am I stuck, because this is just the way Where-Object works?
It is working as designed. You're starting a new (nested) pipeline inside the scriptblock when you call your command.
If your function does the expensive work in its Begin block, then you need to pipe the first script directly into your function to get that advantage.
.\Find-CalRatioSamples.ps1 data16 |
Test-GRIDDataset -JobName DiVertAnalysis -JobVersion 13 -Exists -Location UWTeV-linux |
Where-Object { $_ }
But then it seems that you are not returning the objects you want (the original input objects).
One way you might be able to change Test-GRIDDataset is to implement a -PassThru switch. You aren't actually accepting the full objects from your original script, so I'm unable to tell if this is feasible, but the code you wrote seems to be retrieving... stuff(?) from somewhere based on the name. Perhaps that would be sufficient? When -PassThru is specified, send the objects down the pipeline if they exist (rather than just a boolean of whether or not they do).
Then your code would look like this:
.\Find-CalRatioSamples.ps1 data16 |
Test-GRIDDataset -JobName DiVertAnalysis -JobVersion 13 -Exists -Location UWTeV-linux -PassThru
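For illustration, here's a rough PowerShell sketch of what that pattern could look like: the expensive setup lives in the begin block so it runs only once per pipeline, and -PassThru re-emits the original names. Connect-GRID, Test-GRIDSession, and Disconnect-GRID are made-up helpers standing in for the real C# internals, not the cmdlet's actual API.
function Test-GRIDDataset {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        [string] $JobSourceDatasetName,
        [string] $JobName,
        [int] $JobVersion,
        [string] $Location,
        [switch] $Exists,     # kept for parity with the real cmdlet's parameters
        [switch] $PassThru
    )
    begin {
        # Expensive one-time setup runs once per pipeline, not once per item.
        $session = Connect-GRID -Location $Location            # hypothetical helper
    }
    process {
        # Cheap per-item check runs for each pipeline input.
        $found = Test-GRIDSession -Session $session -Name $JobSourceDatasetName  # hypothetical helper
        if ($PassThru) {
            if ($found) { $JobSourceDatasetName }              # re-emit the original name
        }
        else {
            $found                                             # plain boolean, as -Exists returns today
        }
    }
    end {
        Disconnect-GRID -Session $session                      # hypothetical cleanup
    }
}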
I have a certain command that I want to append a parameter to via a PowerShell profile function. I'm not quite sure of the best way to capture each time this command is run, so any insight would be helpful.
Command: terraform plan
Each time a plan is run, I want to check the parameters and see if -lock=true is passed in, and if not, append -lock=false to the command. Is there a suitable way to capture when this command is run, without just creating a whole new function that builds the command? So far the only way I've seen to capture commands is with Start-Transcript, but that doesn't quite get me to where I need to be.
The simplest approach is to create a wrapper function that analyzes its arguments and adds -lock=false as needed before calling the terraform utility.
function terraform {
    $passThruArgs = $args
    if (-not ($passThruArgs -match '^-lock=')) { $passThruArgs += '-lock=false' }
    & (Get-Command -Type Application terraform) $passThruArgs
}
The above uses the same name as the utility, effectively shadowing the latter, as is your intent.
However, I would caution against using the same name for the wrapper function, as it can make it hard to understand what's going on.
Also, if defined globally via $PROFILE or interactively, any unsuspecting code run in the same session will call the wrapper function, unless an explicit path or the shown Get-Command technique is used.
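For example, once the wrapper is defined (say, dot-sourced from $PROFILE), invocations are adjusted transparently:
terraform plan               # actually runs: terraform plan -lock=false
terraform plan -lock=true    # left untouched; a -lock= argument is already present

# To bypass the wrapper and invoke the real executable directly:
& (Get-Command -Type Application terraform) plan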
Not to take away from the other answer posted, but to offer an alternative solution, here's my take:
$Global:CMDLETCounter = 0
$ExecutionContext.InvokeCommand.PreCommandLookupAction = {
    Param($CommandName, $CommandLookupEvents)
    if ($CommandName -eq 'terraform' -and $Global:CMDLETCounter -eq 0)
    {
        $Global:CMDLETCounter++
        $CommandLookupEvents.CommandScriptBlock = {
            if ($Global:CMDLETCounter -eq 1)
            {
                if (-not ($args -match ($newArg = '-lock=')))
                {
                    $args += "${newArg}true"
                }
            }
            & "terraform" @args
            $Global:CMDLETCounter--
        }
    }
}
You can make use of the $ExecutionContext automatic variable to tap into PowerShell's command lookup and insert your own logic for a specific command. In your case that's terraform: the arguments of each invocation are checked for an existing -lock= token, and if none is found, -lock=true is appended before the command executes.
The counter you see ($Global:CMDLETCounter) prevents an endless loop: the scriptblock invokes terraform itself, which would otherwise recursively trigger the hook with nothing to halt it.
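Once the hook is registered in a session, every lookup of terraform is intercepted; for example:
terraform plan               # rewritten to: terraform plan -lock=true
terraform plan -lock=false   # left alone; a -lock= argument is already present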
In a PowerShell (5.1 and later) script module, I want to ensure that every script and system exception that is thrown calls a logging cmdlet (e.g. Write-Log). I know that I can wrap all the code within the module's cmdlets in try/catch blocks, but my preferred solution would be to use trap for all exceptions thrown while executing cmdlets of the module:
trap { Write-Log -Level Critical -ErrorRecord $_ }
Using the above statement works as intended if I add it to each cmdlet inside the module, but I would like to have only one trap statement that catches all exceptions thrown by cmdlets of the module, both to avoid replicating code and to ensure that I don't miss the statement in any cmdlet. Is this possible?
What I would do is this.
Set up multiple try/catch blocks as needed.
Group multiple cmdlet calls under the same block when you can. As you mentioned, we don't want to group everything under one giant try/catch block, but related calls can go together.
Design your in-module functions as advanced functions, so you can make use of the common parameters, such as... -ErrorAction.
Set $PSDefaultParameterValues = @{'*:ErrorAction'='Stop'} so that errors from cmdlets that support -ErrorAction don't slip past your try/catch blocks.
(You could also manually set -ErrorAction Stop everywhere, but since you want this as the default, it makes sense to do it that way. In any case, you don't want to touch $ErrorActionPreference, as it has global scope and your users won't like you if you change defaults outside of the module scope.)
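Here's a minimal sketch of what functions written this way could look like, assuming a Write-Log cmdlet with the -Level and -ErrorRecord parameters from your question:
# Inside the module's .psm1; the assignment is module-scoped, so callers
# outside the module keep their own defaults.
$PSDefaultParameterValues = @{ '*:ErrorAction' = 'Stop' }

function Get-SomethingUseful {
    [CmdletBinding()]
    param([Parameter(Mandatory)][string] $Path)
    try {
        # Related calls grouped under one try/catch block.
        $item = Get-Item -Path $Path
        Get-Content -Path $item.FullName
    }
    catch {
        Write-Log -Level Critical -ErrorRecord $_
        throw   # rethrow so the caller still sees the failure
    }
}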
You can also redirect the error stream to a file so instead of showing up in the output, it is written to a file.
Here's a self-contained example of this:
& {
    Write-Warning "hello"
    Write-Error "hello"
    Write-Output "hi"
} 2>> 'C:\Temp\redirection.log'
See about_Redirection for more on this.
(Now I am wondering if you can redirect the stream to something other than a file.)
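You can, at least, merge it into another stream: redirecting the error stream (2) into the success stream (1) lets you capture errors in a variable, for example:
$captured = & {
    Write-Error "this goes to the error stream"
    Write-Output "this goes to the success stream"
} 2>&1

# $captured now holds an ErrorRecord followed by the string output.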
Additional note
External modules can help with logging too and might provide a more streamlined approach.
I am not familiar with any of them though.
I know PSFramework has some interesting stuff regarding logging.
You can take a look and experiment to see if it fits your needs.
Otherwise, you can search the PSGallery for logging modules
(this search is far from perfect, but some candidates might be interesting):
Find-Module *logging* | Select-Object Name, Description, PublishedDate, ProjectUri | Sort-Object PublishedDate -Descending
I have the function below that produces multiple outputs. Is there a way I can put all the outputs of the function in a text file? I tried to use Out-File as shown below, but it did not work. Any suggestions?
cls
function functionAD {Write-output ""...}
functionAD | Out-File -FilePath C:\test\task5.txt -Append
The script above still did not work.
UPDATE: This is, in fact, possible if you overwrite the Write-Host function. It can be done like this:
function Write-Host($toWrite) {
    Write-Output $toWrite
}
Copy and paste this code into your PowerShell console, then run the program.
Don't worry about permanently overwriting the Write-Host command; this will only last for the current session.
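With the override in place, the function's Write-Host calls become regular pipeline output, so the original pipeline from the question should now capture everything:
functionAD | Out-File -FilePath C:\test\task5.txt -Append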
OLD COMMENT:
Unfortunately, Write-Host cannot be rerouted to a file or another stream. It is the only 'write' command that acts in that way, which is why PowerShell programmers generally try to avoid using it unless there is a specific reason to. It is intended for messages sent directly to the user, and is thus sent to the host program (PowerShell) itself rather than to an output stream.
I would suggest using some other command if the function is your own. Write-Output is always a safe bet because it can be redirected to any other stream.
Here is a link if you have more questions: https://devblogs.microsoft.com/scripting/understanding-streams-redirection-and-write-host-in-powershell/
I am calling an external .ps1 file which contains a break statement in certain error conditions. I would like to somehow catch this scenario, allow any externally printed messages to show as normal, and continue on with subsequent statements in my script. If the external script has a throw, this works fine using try/catch. Even with trap in my file, I cannot stop my script from terminating.
For answering this question, assume that the source code of the external .ps1 file (authored by someone else and pulled in at run time) cannot be changed.
Is what I want possible, or was the author of the script just not thinking about playing nice when called externally?
Edit: providing the following example.
In badscript.ps1:
if ((Get-Date).DayOfWeek -ne "Yesterday") {
    Write-Warning "Sorry, you can only run this script yesterday."
    break
}
In myscript.ps1:
.\badscript.ps1
Write-Host "It is today."
The result I would like to achieve is to see the warning from badscript.ps1 and then continue on with the further statements in myscript.ps1. I understand why the break statement causes "It is today." never to be printed, but I wanted to find a way around it, as I am not the author of badscript.ps1.
Edit: Updating title from "powershell try/catch does not catch a break statement" to "how to prevent external script from terminating your script with break statement". The mention of try/catch was really more about one failed solution to the actual question which the new title better reflects.
Running a separate PowerShell process from within my script to invoke the external file has ended up being a solution good enough for my needs:
powershell -File .\badscript.ps1 will execute the contents of badscript.ps1 up until the break statement, including any Write-Host or Write-Warning output, and then let my own script continue afterwards.
I get where you're coming from. Probably the easiest way would be to push the script off as a job, and wait for the results. You can even echo the results out with Receive-Job after it's done if you want.
So considering the bad script you have above, and this script file calling it:
# Resolve the folder this script lives in, then run the bad script in a background job.
$path = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent
$start = Start-Job -ScriptBlock { . "$using:path\badScript.ps1" } -Name "BadScript"
# Wait for the job to finish (up to 100 seconds), then echo its output.
$wait = Wait-Job -Name "BadScript" -Timeout 100
Receive-Job -Name "BadScript"
Get-Command -Name "Get-ChildItem"
This will execute the bad script in a job, wait for the results, echo the results, and then continue executing the script it's in.
This could be wrapped in a function for any scripts you might need to call (just to be on the safe side).
Here's the output:
WARNING: Sorry, you can only run this script yesterday.
CommandType     Name              Version    Source
-----------     ----              -------    ------
Cmdlet          Get-ChildItem     3.1.0.0    Microsoft.PowerShell.Management
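As for wrapping it in a function, here's a rough sketch of that idea (Invoke-IsolatedScript is a made-up name):
function Invoke-IsolatedScript {
    param([Parameter(Mandatory)][string] $Path)
    # Run the script in a job so any break dies with the job, not the caller.
    $job = Start-Job -ScriptBlock { . $using:Path } -Name "IsolatedScript"
    Wait-Job -Job $job -Timeout 100 | Out-Null
    Receive-Job -Job $job
    Remove-Job -Job $job -Force
}

# Usage:
Invoke-IsolatedScript -Path ".\badscript.ps1"
Write-Host "It is today."   # still runs; the break died with the job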
In the about_Break documentation it says
PowerShell does not limit how far labels can resume execution. The
label can even pass control across script and function call
boundaries.
This got me thinking, "How can I trick this stupid language design choice?" And the answer is to create a little switch block that traps the break on the way out:
.\NaughtyBreak.ps1
Write-Host "NaughtyBreak about to break"
break
.\OuterScript.ps1
switch ('dummy') { default {.\NaughtyBreak.ps1}}
Write-Host "After switch() {NaughtyBreak}"
.\NaughtyBreak.ps1
Write-Host "After plain NaughtyBreak"
Then when we call OuterScript.ps1 we get
NaughtyBreak about to break
After switch() {NaughtyBreak}
NaughtyBreak about to break
Notice that OuterScript.ps1 correctly resumed after the call to NaughtyBreak.ps1 embedded in the switch, but was unceremoniously killed when calling NaughtyBreak.ps1 directly.
Put break back inside a loop (including switch), where it belongs. Since break only terminates the nearest enclosing loop or switch, a one-iteration loop around the call absorbs it and the rest of your script keeps running:
foreach($i in 1) { ./badscript.ps1 }
'done'
Or
switch(1) { 1 { ./badscript.ps1 } }
'done'
I'm writing a function in PowerShell that I want to be called via other PowerShell functions as well as be used as a standalone function.
With that objective in mind, I want to send a message down the pipeline using Write-Output to these other functions.
However, I don't want Write-Output to write to the PowerShell console. The TechNet page for Write-Output states:
Write-Output:
Sends the specified objects to the next command in the pipeline. If the command is the last command in the pipeline, the objects are displayed in the console.
-NoEnumerate:
By default, the Write-Output cmdlet always enumerates its output. The NoEnumerate parameter suppresses the default behavior, and prevents Write-Output from enumerating output. The NoEnumerate parameter has no effect on collections that were created by wrapping commands in parentheses, because the parentheses force enumeration.
For some reason, this -NoEnumerate switch will not work for me in either the PowerShell ISE or the PowerShell CLI. I always get output to my screen.
$data = "Text to suppress"
Write-Output -InputObject $data -NoEnumerate
This will always return 'Text to suppress' (no quotes).
I've seen people suggest to pipe to Out-Null like this:
$data = "Text to suppress"
Write-Output -InputObject $data -NoEnumerate | Out-Null
$_
This suppresses screen output, but then when I use $_ I have nothing in my pipeline afterwards, which defeats the purpose of using Write-Output in the first place.
System is Windows 2012 with PowerShell 4.0
Any help is appreciated.
Write-Output doesn't write to the console unless it's the last command in the pipeline. In your first example, Write-Output is the only command in the pipeline, so its output is being dumped to the console. To keep that from happening, you need to send the output somewhere. For example:
Write-Output 5
will send "5" to the console, because Write-Output is the last and only command in the pipeline. However:
Write-Output 5 | Start-Sleep
no longer does that because Start-Sleep is now the next command in the pipeline, and has therefore become the recipient of Write-Output's data.
Try this:
Write your function as you have written it with Write-Output as the last command in the pipeline. This should send the output up the line to the invoker of the function. It's here that the invoker can use the output, and at the same time suppress writing to the console.
MyFunction blah, blah, blah | % {do something with each object in the output}
I haven't tried this, so I don't know if it works. But it seems plausible.
My question is not the greatest.
First of all, Write-Output -NoEnumerate doesn't suppress output from Write-Output.
Secondly, Write-Output is supposed to write its output; trying to make it stop is a silly goal.
Thirdly, piping Write-Output to Out-Null or Out-File means that the value given to Write-Output will not continue down the pipeline, which was the only reason I was using it.
Fourth, $suppress = Write-Output "String to Suppress" also doesn't pass the value down the pipeline.
So I'm answering my own question by realizing that if it prints to the screen, that's really not a terrible thing, and moving on. Thank you for your help and suggestions.
Explicitly storing the output in a variable would be more prudent than trying to use an implicit automatic variable. As soon as another command is run, that implicit variable will lose the prior output stored in it. No automatic variable exists to do what you're asking.
If you want to type out a set of commands without storing everything in temporary variables along the way, you can write a scriptblock at the command line as well, and make use of the $_ automatic variable you've indicated you're trying to use.
You just need to start a new line using Shift+Enter and write the code block as you would in a normal scriptblock, in which you could use the $_ automatic variable as part of a pipeline.
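For instance (Get-Widget and its properties are made up here, standing in for your own function):
Get-Widget |
    Where-Object { $_.Enabled } |
    ForEach-Object {
        # $_ is the current object emitted by Get-Widget
        "Processing $($_.Name)"
    }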