Is the PowerShell Pipeline a continuous feed? - powershell

Do the items being sent into the pipeline from a CmdLet get passed into the next CmdLet immediately, or do they go into an internal PS 'accumulator' that stores them up until the current CmdLet completes and then feeds them into the next?
Is there a parameter that can configure continuous mode? If a CmdLet takes single objects, it can be called as data comes into the pipeline, but if a CmdLet takes an array, perhaps PS accumulates and sends the whole array in. Sorting needs a complete set, for example.
I'm interested in writing long-running, open-ended, streaming style CmdLets.
Thanks. I can't find any in-depth discussion on the pipeline.

First part: No, objects sent into a PowerShell function are not necessarily passed on immediately; instead they go into a special variable called $input.
$input is populated with all the incoming pipeline objects before the function body begins to execute. If necessary, PowerShell may even delay execution until all incoming pipeline data is complete.
Second part: If you'd like to process your data continuously instead of waiting for it all to arrive in $input, you can use the filter keyword in place of the function keyword when writing your functions/filters. Filters behave like functions except that they don't wait for all the data; they process each object as it becomes available.
e.g.
filter Get-OddEven {
    # $_ is the current pipeline object; this block runs once per object as it arrives
    $x = $_ % 2
    if ($x -eq 0) { "$_ is even" } else { "$_ is odd" }
}
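If you write your command as an advanced function, you get the same streaming behavior with begin/process/end blocks: begin runs once before any input, process runs once per pipeline object as it arrives, and end runs after the last one. The function below is just an illustrative sketch (not from the book):
function Get-OddEvenStreaming {
    [CmdletBinding()]
    param([Parameter(ValueFromPipeline)] $InputObject)
    begin   { $count = 0 }                  # runs once, before any pipeline input
    process {                               # runs once per object, as it arrives
        $count++
        if ($InputObject % 2 -eq 0) { "$InputObject is even" } else { "$InputObject is odd" }
    }
    end     { Write-Verbose "Processed $count objects" }   # runs once, after the last object
}
1..5 | Get-OddEvenStreaming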
Hope this helps. Lest I plagiarize, though: all of this comes from the book "Microsoft Windows PowerShell Programming for the Absolute Beginner" by Jerry Lee Ford, Jr., should you want to take a further look.

Related

PowerShell query Windows task scheduler for tasks that will run between specific datetimes

I've inherited a server that runs Windows scheduled tasks for several hundred various processes (mostly it kicks off custom PowerShell scripts) - the schedules for these tasks can be as frequently as every 15 minutes, or as infrequently as once per year on a specific date.
How can I query task scheduler using PowerShell (or really any other means also) to determine what if any tasks will be running between a specific date-time range in the future? This is necessary so we can for example schedule and perform maintenance on this server, and others that it interacts with.
I should start out by mentioning this is a deceptively complex topic. There are a few reasons for this, including but not limited to:
Scheduled Tasks as objects can be very complex with widely variable schedules, repetition intervals, other trigger types, and a plethora of other configuration points.
The non-GUI tools (CLI, cmdlets, and/or APIs) have changed over the years, and it's not always easy to string them together to handle the complexities cited in point 1.
WARNING: This report is going to be very tough to write. Before you embark on a challenge like this you should look for any preexisting tools that may inventory or lend other transparency to the Task Scheduler.
Also, it may be better to define the maintenance window as a matter of policy. Then you can configure jobs to avoid the window. At a glance, this strikes me as much easier than writing a difficult program to assist in the cherry-picking of a time slot.
If you choose to move forward here are a few tools that may help:
Schtasks.exe: A surprisingly capable CLI tool that's been around a long time.
The TaskScheduler PowerShell module: A PowerShell module written around scheduling COM objects. It was originally part of the Windows 7 PowerShell pack, but these days you can get it through the PowerShell Gallery. The easiest way is through the typical module cmdlets like Find-Module & Install-Module.
The newer ScheduledTasks module: written around CIM (WMI) and installed by default on later versions of PowerShell / Windows.
CAUTION: It's easy to confuse the latter 2 tools. Not only are the names very similar, but there is overlap and similarity between the commands they contain. For example, both modules have a Get-ScheduledTask cmdlet. Two ways to deal with the confusion:
When importing a module with the Import-Module cmdlet, use the -Prefix parameter to add a prefix to the noun part of the cmdlet names, then use that prefix when calling the cmdlets thereafter (see the example after this list).
Call cmdlets with a module-qualified name like TaskScheduler\Get-ScheduledTask
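For example, a minimal sketch of the prefix approach (the prefix Old is just an illustrative choice):
# Import the older module with a prefix so its commands can't collide with the ScheduledTasks module
Import-Module TaskScheduler -Prefix Old
# The prefix is added to the noun, so you now call:
Get-OldScheduledTask -ComputerName <ComputerName>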
Now to get at the scheduling data. In my experience, the needed details are only exposed through the task's XML definition. Again, you're going to bump up against the complexity that comes with a wide range of scheduling and/or trigger options. You can read about the XML schema here
Here are some examples of how to access the XML data:
schtasks.exe /query /s pyexadm1 /tn <TaskName> /XML
In this case, you'll have to do additional string manipulation to isolate the XML then cast it to [XML] so you can work with it in a typical PowerShell manner. Obviously, there will be challenges to leveraging this tool more broadly. However, it's very handy to know for quick checks and work, especially where the next tool is not immediately available.
Note: if you don't supply the /TN argument, all tasks will be returned. While the next method is easier, it's good to know this approach; it will be handy while you are developing.
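As a rough sketch of that string handling (the task name is a placeholder, and the exact trimming needed may vary):
# Capture the command's output lines, join them back into one string, trim, and cast to XML
$raw = schtasks.exe /query /s pyexadm1 /tn <TaskName> /XML
$TaskXML = [XML](($raw -join "`n").Trim())
$TaskXML.Task.Triggers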
The next example uses the older TaskScheduler module (#2 above):
$TaskXML = [XML](TaskScheduler\Get-ScheduledTask -ComputerName <ComputerName> -Name <TaskName>).XML
Note: the above assumes no prefix was used, so you must cite the source module to prevent confusion with the ScheduledTasks module.
This example loads the XML text and converts it to an XmlDocument object in a single line. Then you can access data about the task like below:
$TaskXML.Task.Triggers.CalendarTrigger
This may yield output like:
StartBoundary         Enabled  ScheduleByWeek
-------------         -------  --------------
2020-09-14T08:00:00   true     ScheduleByWeek
You can run this in bulk by leveraging the pipeline, which might look something like this:
$XMLTaskData =
TaskScheduler\Get-ScheduledTask -ComputerName <ComputerName> -Recurse |
ForEach-Object{ [XML]$_.XML }
In the above pipeline example, the resulting $XMLTaskData is an array, each element of which is the XML definition of one task.
Note the use of the -Recurse switch parameter. Given the high number of tasks, I wouldn't be surprised if they were organized into subfolders.
Similarly, you can also use the Export-ScheduledTask cmdlet from the ScheduledTasks module:
$TaskXML = [XML](Export-ScheduledTask -CimSession <ComputerName> -TaskName <TaskName>)
And you can leverage the pipeline like this:
$XMLTaskData =
Get-ScheduledTask -CimSession <ComputerName> |
Export-ScheduledTask |
ForEach-Object{ [XML]$_ }
Like the other piped example, this results in an array of XML task definitions.
Note: in this case there is no -Recurse parameter, but you can cite specific paths, as sketched below.
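For instance, a hedged sketch using the -TaskPath parameter of Get-ScheduledTask (the folder name is a placeholder):
Get-ScheduledTask -CimSession <ComputerName> -TaskPath '\<FolderName>\*' |
    Export-ScheduledTask |
    ForEach-Object{ [XML]$_ }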
With any of these approaches you obviously need some familiarity with working with XML objects in PowerShell, but there are tons of tutorials or other resources for that.
Again, the complexity here is in dealing with many trigger types and scheduling paradigms. On your road to getting a Minimally Viable Program (MVP), you may want to use these techniques to inventory the existing tasks. That can help you prioritize your development process.
A final point: knowing when a task is going to start may be quite different from knowing when it's running. For example, a task may start at 1:00 PM, but the duration of the job is variable and unaccounted for. That strikes me as very difficult to contend with. You may need another procedure to look for task completion events in the event logs. You may also need to consider execution time limits, which can be found in the XML data.
Here is a script to get the next run times of scheduled tasks:
Get-ScheduledTask |
    ForEach-Object { Get-ScheduledTaskInfo $_ } |
    Where-Object { $null -ne $_.NextRunTime } |
    Select-Object TaskName, NextRunTime |
    Sort-Object -Property NextRunTime
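To get at the date-range part of the question, you could then filter on NextRunTime, something like the rough sketch below (the window boundaries are illustrative). Keep in mind this only reflects each task's next occurrence, not every occurrence that would fall within a longer window:
$windowStart = Get-Date '2020-10-01 08:00'
$windowEnd   = Get-Date '2020-10-01 17:00'
Get-ScheduledTask |
    Get-ScheduledTaskInfo |
    Where-Object { $_.NextRunTime -ge $windowStart -and $_.NextRunTime -le $windowEnd } |
    Select-Object TaskName, NextRunTime |
    Sort-Object -Property NextRunTime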

Is there an easy way to have Powershell always show the actual value?

These things drive me nuts:
Is there an easy way to have Powershell just show me the empty string and the list with an empty string in it?
For a while I have been maintaining a ConvertTo-Expression command which converts a (complex) object into a PowerShell expression that can eventually be used to rebuild most objects. It might be useful in situations such as comparing test results, but also to reveal objects. For details see: readme.
Source and installation
The ConvertTo-Expression script can be installed from the PowerShell Gallery:
Install-Script -Name ConvertTo-Expression
As it concerns a standalone script, installation isn't really required. If you don't have administrator rights, you might just download the script (or copy it) to the required location. You might then simply invoke the script using PowerShell dot sourcing:
. .\ConvertTo-Expression.ps1
Example
The following command outputs the same expression as used to build the object:
$Object = [Ordered]@{
    'Null' = $Null
    'EmptyString' = ''
    'EmptyArray' = @()
    'SingleItem' = ,''
    'EmptyHashtable' = @{}
}
ConvertTo-Expression $Object
Note the comment from @Mathias: there's no functional difference between "one string" and "an array of one string"; the pipeline consumes 1 string either way. PowerShell is not Node, as described here: PowerShell enumerate an array that contains only one inner array. Some objects might be really different than you expect.
See also: Save hash table in PowerShell object notation (PSON)
This is PowerShell, not Node, so it's not JavaScript or JSON. Also, PowerShell is not Bash or CMD or any other regular text-based shell. PowerShell works with objects, .NET objects in particular. And how objects are represented as text is ... quite a matter of taste. How to represent null? Of course: nothing. How to represent an empty string? Nothing, either. An empty array ... you get my point.
All pipeline output is by default sent to Out-Default. In general, the way objects are represented can be controlled by format files: about_Format.ps1xml and about_Types.ps1xml. From PowerShell 6 upwards, the default formats are compiled into the source code, but you can extend them. How you do so depends on your personal taste. Some options were already mentioned (ConvertTo-Json "", ConvertTo-Json @("")), but these would be very JSON-specific.
tl;dr Don't care too much about how objects are represented textually. As you see, there are many possible ways to do so, and also some others. Just make sure your scripts are always object-oriented.
You mean like Python's repr() function? A serialization? "Give me a canonical representation of the object that, when properly evaluated, will return an object of the type originally passed?" No, that's not possible unless you write it yourself or you serialize it to XML, JSON, or similar. That's what the *-CliXml and ConvertTo-*/ConvertFrom-* commands are for.
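As a small illustration of those serialization commands (the file path and variable names are just placeholders):
# Round-trip an object through CliXml; the serialized form can rebuild an equivalent object later
$data = [Ordered]@{ EmptyString = ''; SingleItem = ,'' }
$data | Export-Clixml -Path .\data.clixml
$rebuilt = Import-Clixml -Path .\data.clixml

# Or render it as JSON, which makes empty strings and single-item arrays visible as text
$data | ConvertTo-Json -Compress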
On Powershell 7:
PS C:\> ''.Split(',') | ConvertTo-Json -Compress -AsArray
[""]
PS C:\> '1,2,3,,5'.Split(',') | ConvertTo-Json -Compress -AsArray
["1","2","3","","5"]
The closest would be the ToString() common method. However, that's intended for formatting output and typecasting, not canonical representations or serializations. Not every object even has a string representation that can be converted back into that object. The language isn't designed with that in mind. It's based on C#, a compiled language that favors binary serializations. JavaScript requires essentially everything to have a string serialization in order to fulfill its original design. Python, too, has string representations as fundamental to its design, which is why the repr() function is foundational for serialization and introspection and so on.
C# and .NET go the other way. In a .NET application, if you want to specify a literal empty string, you're encouraged to use String.Empty ([String]::Empty in PowerShell) because it's easier to see that it's explicit. Why would you ever want a compiled application to tell the user what the canonical representation of an object in memory is? You can see why that's not a useful thing for C#, even if it might be for PowerShell.
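For what it's worth, the two spellings are interchangeable in PowerShell:
[String]::Empty -eq ''    # True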

Add auto complete features for switch in powershell

I am attempting to add auto-complete features in PowerShell. In this case I would like to be able to type "test" in my console, and after that to be able to type Get-Se[TAB] to auto-complete to Get-Search using tab expansion.
PS > Get-Se[TAB]
PS > Get-Search
function test
{
    [CmdletBinding()]
    param()

    # label the while loop "outer"
    :outer while ($true) {
        $x = Read-Host

        # split $x into two parts
        $first, $second = $x -split '\s', 2

        # switch evaluating the first part
        switch ($first) {
            'Get-Search' {
                # Searching
            }
            default {
                Write-Host "False"
            }
        }
    }
}
Additional Information:
Goal:
I'd like to be able to use arguments that look like cmdlets to have the Powershell feel.
About the original script:
I have created a script to automate queries from several APIs, for many different users. What I have right now for search is "s", and I'd like it to be "Get-Search". So Read-Host waits for input, the user types "Get-Search 'value'", and formatted JSON is returned.
PS > Get-Search foobar
#Returns JSON
I had a hard time understanding your intention at first, but I think I get it now.
You want to implement tab completion (tab expansion) inside the Read-Host prompt.
Unfortunately, there is no way to do that.
If you share why you want this, there may be better ways to achieve your ultimate goal.
Based on your additional information, I have a different approach.
Create actual functions for each of your queries, like Get-Search, etc. You can even add aliases for them so that s corresponds directly if you want.
Wrap all of these functions in a proper module, so that you can import them (see next step).
Create a constrained runspace that only allows the user to execute the specific functions and aliases you want (this is easier with a module, but the module is not a requirement).
What this can do is give your end users access (even remotely) to a PowerShell session which can only use the functions you've created and allowed to be executed. Other cmdlets/functions and even language features like using variables will be restricted and unavailable.
That way, you get true PowerShell tab expansion and semantics, and you end up with a real set of functions that can be used in an automated way as well.
You don't have to write any prompting or parsing.
Further, the session can be secured, allowing only specific users and groups to connect to it.
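A minimal sketch of the constrained-endpoint step, assuming your functions live in a module named MyQueries (the module, endpoint, and alias names are illustrative):
# Describe a restricted endpoint that only exposes Get-Search (and its alias)
New-PSSessionConfigurationFile -Path .\MyQueries.pssc `
    -SessionType RestrictedRemoteServer `
    -ModulesToImport MyQueries `
    -VisibleFunctions 'Get-Search' `
    -VisibleAliases 's'

# Register the endpoint (requires elevation), then connect to it
Register-PSSessionConfiguration -Name MyQueries -Path .\MyQueries.pssc
Enter-PSSession -ComputerName localhost -ConfigurationName MyQueries
Inside that session, tab completion works for Get-Search, but everything else is unavailable.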

How to change default output Formatting of Powershell to use Format-Table -autosize?

How can I force PowerShell to use
Format-Table -AutoSize
as the default formatting when writing a returned array of objects to the console?
Thanks
If you are OK with calling the cmdlet every time, and if you have at least PowerShell v3.0, then you can set $PSDefaultParameterValues, which you can read more about at about_Parameters_Default_Values.
The syntax that would satisfy your need would be:
$PSDefaultParameterValues = @{"<CmdletName>:<ParameterName>" = "<DefaultValue>"}
So we add in the switch by setting it to $true:
$PSDefaultParameterValues = @{"Format-Table:AutoSize" = $true}
To remove this, you would do it much the same as you would for any hashtable element:
$PSDefaultParameterValues.Remove("Format-Table:AutoSize")
From the aforementioned article, here is some pertinent information on how to deal with these:
$PSDefaultParameterValues is a preference variable, so it exists only in the session
in which it is set. It has no default value.
To set $PSDefaultParameterValues, type the variable name and one or more key-value pairs
at the command line.
If you type another $PSDefaultParameterValues command, its value replaces the original
value. The original is not retained.
To save $PSDefaultParameterValues for future sessions, add a $PSDefaultParameterValues
command to your Windows PowerShell profile. For more information, see about_Profiles.
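Note that assigning a whole new hashtable replaces any existing defaults, as the excerpt above says. A small sketch of adding a single entry instead, and persisting it for future sessions:
# Add (or update) one entry without wiping other defaults
$PSDefaultParameterValues['Format-Table:AutoSize'] = $true

# Persist it by appending the same line to your profile
Add-Content -Path $PROFILE -Value '$PSDefaultParameterValues[''Format-Table:AutoSize''] = $true'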
Outside of that, I am not sure, as it would be difficult to change in a dynamic sense. You would want data sent to the stream to appear on screen the same way Format-Table -AutoSize renders it, but you would have to make sure it does not alter the data, so that you could still capture it or send it down the pipe.
If you are looking at creating custom output format files, like Frode F. talks about, then you would need to look at about_Format.ps1xml; but you would need to configure this for every object type that you want to display this way.
FileSystem.format.ps1xml, for example, governs the output from Get-ChildItem. Format-Table is more dynamic, and I don't think you can simply say "use Format-Table" in that file.

Is it possible for PowerShell to write message to multiple target

Is there a way to use the pipeline in PowerShell to Write-Output and write to a file at the same time, without using a custom wrapping function?
Take a look at Tee-Object. From help:
The Tee-Object cmdlet sends the output of a command in two directions
(like the letter "T"). It stores the output in a file or variable and
also sends it down the pipeline. If Tee-Object is the last command in
the pipeline, the command output is displayed in the console.
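A minimal sketch (the file path is just an illustrative choice):
# Send the output both to a log file and on down the pipeline / console
Get-Process | Tee-Object -FilePath .\processes.log | Select-Object Name, Id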