Re-execute powershell script on some condition after conversion to EXE - powershell

I have a script that I converted to an EXE so I can run it as a service. I am trying to make the script execute itself again on some condition. I have a foreach loop in the script, and I want to break out and re-execute, so it would be similar to this:
foreach ($item in $collection) {
    if ((Get-Date -Format "yyyyMMdd") -ne $CURRENTDATE) {
        **re-execute script**
    }
    send_email $item
}

Wrap your code in a function, and call that function from inside the if block.

There are a few ways to do this.
You can make a function, and call it directly:
function myFunction($myCollection) {
    foreach ($item in $myCollection) {
        if ((Get-Date -Format "yyyyMMdd") -ne $CURRENTDATE) {
            myFunction $myCollection   # start over from the top
            return
        }
        send_email $item
    }
}
More info on using functions can be found in the about_Functions help topic (they are odd things compared to C syntax). This would probably be the optimal route.
Another route would be a while loop:
while ((Get-Date -Format "yyyyMMdd") -ne $CURRENTDATE)
{
    foreach ($item in $myCollection) {
        # Iterate here, then exit to the while loop, where the condition is checked again.
        # If it still does not match today's date, the foreach loop runs again.
    }
}
send_email $item # Script will not get here until it meets your date check condition

If you want to run this as a service, you need some way to limit how frequently the script checks the date; otherwise it'll use a lot of CPU!
There are a couple of ways you could do this. You could take other answers here and add the Start-Sleep cmdlet with a suitable amount of time in the for/while loop.
However, I'm thinking you might have a reason to rerun your EXE. The other answers wouldn't rerun the EXE; they would just keep it running until the date rolled around.
If you do want to rerun your EXE, you can just enter the EXE's command line where you have **re-execute script**. You won't need the foreach loop, but you do need to call Start-Sleep with a reasonable interval.
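As a minimal sketch of that pattern (the EXE name and path are assumptions, not from the original script):
# Assumes the compiled script lives at C:\Scripts\MyService.exe
while ((Get-Date -Format "yyyyMMdd") -eq $CURRENTDATE) {
    Start-Sleep -Seconds 60   # throttle the date check so the process stays mostly idle
}
Start-Process -FilePath "C:\Scripts\MyService.exe"   # relaunch the EXE...
exit                                                 # ...and let the new instance take over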

Related

Powershell Profile to append parameters to a certain command

I have a certain command that I want to append a parameter to via a PowerShell profile function. I'm not quite sure of the best way to capture each time this command is run; any insight would be helpful.
Command: terraform plan
Each time a plan is run, I want to check the parameters to see if -lock=true is passed in, and if not, append -lock=false. Is there a suitable way to capture when this command is run, without creating a whole new function that builds the command? So far the only way I've seen to capture commands is with Start-Transcript, but that doesn't quite get me where I need to be.
The simplest approach is to create a wrapper function that analyzes its arguments and adds -lock=false as needed before calling the terraform utility.
function terraform {
    $passThruArgs = $args
    # Append -lock=false unless a -lock= argument was already passed.
    if (-not ($passThruArgs -match '^-lock=')) { $passThruArgs += '-lock=false' }
    # Invoke the actual terraform executable rather than this function.
    & (Get-Command -Type Application terraform) $passThruArgs
}
The above uses the same name as the utility, effectively shadowing the latter, as is your intent.
However, I would caution against using the same name for the wrapper function, as it can make it hard to understand what's going on.
Also, if defined globally via $PROFILE or interactively, any unsuspecting code run in the same session will call the wrapper function, unless an explicit path or the shown Get-Command technique is used.
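For instance, code that needs the real binary can bypass the wrapper explicitly (the full path shown is only an illustration):
& (Get-Command -Type Application terraform) plan   # resolves the executable, not the function
& 'C:\Tools\terraform.exe' plan                    # or call it by explicit path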
Not to take away from the other answer posted, but to offer an alternative solution here's my take:
$Global:CMDLETCounter = 0
$ExecutionContext.InvokeCommand.PreCommandLookupAction = {
    Param($CommandName, $CommandLookupEvents)
    if ($CommandName -eq 'terraform' -and $Global:CMDLETCounter -eq 0)
    {
        $Global:CMDLETCounter++
        $CommandLookupEvents.CommandScriptBlock = {
            if ($Global:CMDLETCounter -eq 1)
            {
                # Append -lock=true unless a -lock= argument was already passed.
                if (-not ($args -match ($newArg = '-lock=')))
                {
                    $args += "${newArg}true"
                }
            }
            & "terraform" @args   # splat the (possibly amended) arguments to the real command
            $Global:CMDLETCounter--
        }
    }
}
You can make use of the $ExecutionContext automatic variable to tap into PowerShell's command lookup and insert your own logic for a specific command name. In your case, you'd hook terraform: the incoming arguments are checked for an existing -lock= token, and if none is found, -lock=true is appended before the command is executed.
The counter you see ($Global:CMDLETCounter) prevents an endless loop: the scriptblock itself invokes terraform, which would otherwise trigger the lookup hook recursively with nothing to halt it.
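With the hook loaded into the session (for example from $PROFILE), an ordinary invocation picks up the extra argument automatically:
terraform plan
# is intercepted and effectively runs: terraform plan -lock=true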

How to save the results of a function to a text file in Powershell

I have the function below that produces multiple outputs. Is there a way I can put all the outputs of the function in a text file? I tried using Out-File as shown below, but it did not work; any suggestions?
cls
function functionAD {Write-output ""...}
functionAD | Out-File -FilePath C:\test\task5.txt -Append
The script above still did not work.
UPDATE: This is, in fact, possible if you overwrite the Write-Host function. It can be done like this:
function Write-Host($toWrite) {
    Write-Output $toWrite   # forward to the success stream, which Out-File can capture
}
Copy and paste this code into your PowerShell console, then run the program.
Don't worry about permanently overwriting the Write-Host command; the change only lasts for the current session.
OLD COMMENT:
Unfortunately, Write-Host cannot be redirected to another file stream. It is the only 'write' cmdlet that behaves this way, which is why PowerShell programmers generally avoid it unless there is a specific reason to use it. It is intended for messages sent directly to the user, so it is sent to the host program (PowerShell itself) rather than to an output stream.
I would suggest using some other command if the function is your own. Write-Output is always a safe bet because it can be redirected to any other stream.
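For example, a function built on Write-Output redirects cleanly; the body below is just a stand-in for the asker's real functionAD:
function functionAD {
    Write-Output "first result"    # placeholder output lines
    Write-Output "second result"
}
functionAD | Out-File -FilePath C:\test\task5.txt -Append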
Here is a link if you have more questions: https://devblogs.microsoft.com/scripting/understanding-streams-redirection-and-write-host-in-powershell/

how to prevent external script from terminating your script with break statement

I am calling an external .ps1 file which contains a break statement in certain error conditions. I would like to somehow catch this scenario, allow any externally printed messages to show as normal, and continue on with subsequent statements in my script. If the external script has a throw, this works fine using try/catch. Even with trap in my file, I cannot stop my script from terminating.
For answering this question, assume that the source code of the external .ps1 file (authored by someone else and pulled in at run time) cannot be changed.
Is what I want possible, or was the author of the script just not thinking about playing nice when called externally?
Edit: providing the following example.
In badscript.ps1:
if ((Get-Date).DayOfWeek -ne "Yesterday") {
    Write-Warning "Sorry, you can only run this script yesterday."
    break
}
In myscript.ps1:
.\badscript.ps1
Write-Host "It is today."
The result I would like is to see the warning from badscript.ps1 and then continue on with the further statements in myscript.ps1. I understand why the break statement causes "It is today." to never be printed; however, I want to find a way around that, as I am not the author of badscript.ps1.
Edit: Updating title from "powershell try/catch does not catch a break statement" to "how to prevent external script from terminating your script with break statement". The mention of try/catch was really about one failed solution to the actual question, which the new title better reflects.
Running a separate PowerShell process from within my script to invoke the external file has ended up being a solution good enough for my needs:
powershell -File .\badscript.ps1 will execute the contents of badscript.ps1 up until the break statement including any Write-Host or Write-Warning's and let my own script continue afterwards.
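In context, myscript.ps1 would then look something like this:
# Run the external script in a child process; its break can only terminate that process.
powershell -File .\badscript.ps1
Write-Host "It is today."   # still runs once the child process exits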
I get where you're coming from. Probably the easiest way would be to push the script off as a job, and wait for the results. You can even echo the results out with Receive-Job after it's done if you want.
So considering the bad script you have above, and this script file calling it:
# Determine the folder this script lives in.
$path = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent
# Run the bad script in a background job so its break only ends the job.
$start = Start-Job -ScriptBlock { . "$using:path\badScript.ps1" } -Name "BadScript"
$wait = Wait-Job -Name "BadScript" -Timeout 100
Receive-Job -Name "BadScript"       # echo whatever the job printed
Get-Command -Name "Get-ChildItem"   # proof that execution continues
This will execute the bad script in a job, wait for the results, echo the results, and then continue executing the script it's in.
This could be wrapped in a function for any scripts you might need to call (just to be on the safe side).
Here's the output:
WARNING: Sorry, you can only run this script yesterday.
CommandType     Name              Version    Source
-----------     ----              -------    ------
Cmdlet          Get-ChildItem     3.1.0.0    Microsoft.PowerShell.Management
In the about_Break documentation it says
PowerShell does not limit how far labels can resume execution. The
label can even pass control across script and function call
boundaries.
This got me thinking, "How can I trick this stupid language design choice?". And the answer is to create a little switch block that will trap the break on the way out:
.\NaughtyBreak.ps1
Write-Host "NaughtyBreak about to break"
break
.\OuterScript.ps1
switch ('dummy') { default {.\NaughtyBreak.ps1}}
Write-Host "After switch() {NaughtyBreak}"
.\NaughtyBreak.ps1
Write-Host "After plain NaughtyBreak"
Then when we call OuterScript.ps1 we get
NaughtyBreak about to break
After switch() {NaughtyBreak}
NaughtyBreak about to break
Notice that OuterScript.ps1 correctly resumed after the call to NaughtyBreak.ps1 embedded in the switch, but was unceremoniously killed when calling NaughtyBreak.ps1 directly.
Put break back inside a loop (including switch), where it belongs:
foreach($i in 1) { ./badscript.ps1 }
'done'
Or
switch(1) { 1 { ./badscript.ps1 } }
'done'

Foreach of a piped number array isn't breaking properly

I don't quite understand this: why doesn't the following code work?
"start"
1..5 | foreach {
"$_"
break
}
"stop"
I've done a couple of tests, and this code does work properly:
"start"
foreach ($num in 1..5){
"$num"
break
}
"stop"
Is there a way to make the first example run properly? The last outputted line should be "stop", like so:
start
1
stop
First, you should know that you are using two entirely different language features when you use foreach ($thing in $things) {} vs. $things | foreach { }.
The first is the built-in foreach statement, and the second is an alias for ForEach-Object, and they work very differently.
ForEach-Object runs the scriptblock for each of the items, and it works within a pipeline.
The break statement in that case doesn't do what you might expect: the scriptblock is not a loop body, so break looks upward for an enclosing loop, finds none, and terminates the entire pipeline along with the rest of the script, which is why "stop" is never printed.
How you would go about limiting the results depends on what you want to do.
If you just want to stop producing results, just don't return anything if the condition is met. You'll still run every iteration, but the results will be correct.
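A minimal sketch of that approach, using the asker's example:
"start"
1..5 | ForEach-Object {
    if ($_ -eq 1) { "$_" }   # emit output only while the condition holds
}
"stop"
This prints start, 1, stop, although the scriptblock still runs for all five items.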
If you only need to return a certain number of items, like the first N items, the best way (from PowerShell v3 on) is to add Select-Object:
1..10 | ForEach-Object {
$_*2
} | Select-Object -First 5
This will only execute 5 times, and it will return the sequence 2,4,6,8,10.
This is because of how the pipeline works where each object gets sent through each cmdlet, and Select-Object can stop the pipeline so it doesn't keep executing.
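You can observe the early stop by adding a side effect (a sketch):
1..10 | ForEach-Object {
    Write-Verbose "processing $_" -Verbose   # on v3+ this prints only five times
    $_ * 2
} | Select-Object -First 5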
Pre-version 3.0, the pipeline cannot be stopped in that way, and although the results will be correct, you won't have prevented the extra executions.
If you give more details on what your conditions are for exiting, I could give more input as to how you'd want to approach that particular problem (which may involve not using ForEach-Object).

Is it possible to make a cmdlet work with all items being piped into it at once?

Instead of counting sheep this evening, I created a cmdlet that lists all duplicate files in a directory. It's dirt stupid simple and it can only work with all files in a directory, and I'm not keen on reinventing the wheel to add filtering, so here's what I want to do with it instead:
dir | duplicates | del
The only catch is that, normally, any given command in the pipe only works with one object at a time, which will do no good whatsoever for detecting duplicates. (Of course there are no duplicates in a set of one, right?)
Is there a trick I can use to have the second command in the chain collect all the output from the first before doing its job and passing things on to the third command?
You can work with a single file at a time; you just have to store each file you receive in the process block, and then process all the files in an end block. This is how commands like Group-Object and Sort-Object work: they can't group or sort until they have all the input. Once they have all the input, they do their operation and then begin streaming the results down the pipeline again in grouped or sorted order.
So I actually came up with the answer while I was in the shower and came back to find Keith had already provided it. Here's an example anyway.
begin
{
    Add-Type -Path ($env:USERPROFILE + '\bin\CollectionHelper.cs')
    [string[]] $files = @()
}
process
{
    $files += $_.FullName   # collect the path of each file piped in
}
end
{
    # All input has been received; find the duplicates in one pass.
    [JMA.CollectionHelper]::Duplicates($files)
}