Colorize findstr/grep output in PowerShell - powershell

I've found quite a few questions, answers, and other posts that talk about aliasing grep to findstr. While this is useful, one of the things I like about grep in many terminal environments is that the found text is highlighted, often in red.
findstr /A:color seems to imply that this should be possible and even easy...but I've yet to make it work in a simple statement like:
set-alias grep "findstr /A:color F0"
Is there a way to do this?
Thanks

I seldom create new aliases, so there might very well be other (better) ways than the one I describe below, but this way seems to work.
The help section on Set-Alias, for the parameter Value states:
-Value
Specifies the name of the cmdlet or command element that is being aliased.
So it expects the aliased command to either be a cmdlet or a command element.
If you do Get-Help Set-Alias -Examples, you'll see that the last example does something similar to what you are hoping for: it calls a command with some hard-coded parameters. It does this with the following code:
PS C:\>function CD32 {set-location c:\windows\system32}
PS C:\>set-alias go cd32
So what you need to do is create a function which does what you want and then alias that function. An example of this would be:
function Search-Text
{
    & c:\Windows\system32\findstr.exe /A:F0 $Args
}
Then you could alias that new function to grep like:
Set-Alias grep Search-Text
And finally just call it like:
grep mysearchword file1.txt file2.txt
Note: The above code works fine for me when I run it in the ordinary PowerShell shell (probably since this is actually the ordinary command shell with some extra stuff), but does not get the expected colored printing in the ISE (obviously the ISE handles the host printing differently).
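Incidentally, the two hex digits after /A: follow the same convention as cmd.exe's COLOR command (first digit background, second digit foreground), so other combinations are worth experimenting with; for example, 0A should give bright green text on a black background:
function Search-Text
{
    # 0A = black background, bright green foreground (hex color attribute)
    & c:\Windows\system32\findstr.exe /A:0A $Args
}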

Related

Why is PS Get-ChildItem so difficult

I did a ton of reading and searching about a way to have Get-ChildItem return a dir listing in wide format, in alphabetical order, with the number of files and directories in the current directory. Here is an image of what I ended up with, but not using GCI.
I ended up writing a small PS file.
$bArgs = "--%/c"
$cArgs = "Dir /n/w"
& cmd.exe -ArgumentList $bArgs $cArgs
As you can see I ended up using the old cmd.exe and passing the variables I wanted. I made an alias in my PS $Profile to call this script.
Can this not be accomplished in PS v5.1? Thanks for any help or advice for an old noob.
PowerShell's for-display formatting differs from cmd.exe's, so if you want the formatting of the latter's internal dir command, you'll indeed have to call it via cmd /c, using a function you can place in your $PROFILE file (note that aliases in PowerShell are merely alternative names and therefore cannot include baked-in arguments):
function lss { cmd /c dir /n /w /c $args }
Note that you lose a key benefit of PowerShell: the ability to process rich objects:
PowerShell-native commands output rich objects that enable robust programmatic processing; e.g., Get-ChildItem outputs System.IO.FileInfo and System.IO.DirectoryInfo instances. For-display formatting is decoupled from the data output and only kicks in when printing to the display (host), or when explicitly requested.
For instance, (Get-ChildItem -File).Name returns an array of all file names in the current directory.
By contrast, PowerShell can only use text to communicate with external programs, which makes processing cumbersome and brittle if information must be extracted via text parsing.
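For example, because Get-ChildItem emits objects with typed properties, you can compute on them directly (a quick sketch); doing the same on top of cmd.exe's dir output would require parsing its text:
# Sum the sizes of all files in the current directory, using the Length property
# of the System.IO.FileInfo objects that Get-ChildItem emits:
(Get-ChildItem -File | Measure-Object -Property Length -Sum).Sum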
As Pierre-Alain Vigeant notes, the following PowerShell command gives you output formatting at least similar to that of your dir command, though it lacks the combined-size and bytes-free summary at the bottom:
Get-ChildItem | Format-Wide -AutoSize
To wrap that up in a function, use:
function lss { Get-ChildItem @args | Format-Wide -AutoSize }
Note, however, that due to the use of a Format-* cmdlet (all of which output objects that are formatting instructions rather than data), this function's output is also not suited to further programmatic processing.
A proper solution would require you to author custom formatting data and associate it with the System.IO.FileInfo and System.IO.DirectoryInfo types, which is nontrivial, however.
See the conceptual about_Format.ps1xml help topic, Export-FormatData, Update-FormatData, and this answer for a simple example.
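For illustration only, a minimal sketch of such a custom format file might look as follows (the file name MyFileSystem.Format.ps1xml is made up; prepending it makes this wide view take precedence over the built-in views for those types in the current session):
# Save a minimal wide-view definition for FileInfo/DirectoryInfo to a .ps1xml file ...
@'
<Configuration>
  <ViewDefinitions>
    <View>
      <Name>FileSystemWide</Name>
      <ViewSelectedBy>
        <TypeName>System.IO.FileInfo</TypeName>
        <TypeName>System.IO.DirectoryInfo</TypeName>
      </ViewSelectedBy>
      <WideControl>
        <WideEntries>
          <WideEntry>
            <WideItem>
              <PropertyName>Name</PropertyName>
            </WideItem>
          </WideEntry>
        </WideEntries>
      </WideControl>
    </View>
  </ViewDefinitions>
</Configuration>
'@ | Set-Content -Path .\MyFileSystem.Format.ps1xml

# ... then load it with higher precedence than the built-in format data:
Update-FormatData -PrependPath .\MyFileSystem.Format.ps1xml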

Shorter execution of powershell script

Say I have a script to be executed in a single call, how do I do it?
Like, say I have a powershell script saved at E:\Fldr\scrpt.ps1.
Now, if I want to execute that script normally in PowerShell ISE, I would have to use:
& "E:\Fldr\scrpt.ps1"
and the scrpt.ps1 gets executed.
But what I want is this: when I type a word, say "exeScrpt", instead of & "E:\Fldr\scrpt.ps1", I want scrpt.ps1 to get executed.
Is there a way to do this?
Thank you for checking in.
You can wrap your call to the script in a function:
function Invoke-Script
{
    E:\Fldr\scrpt.ps1
}
Then you can run your script by executing a single command anywhere after the definition:
Invoke-Script
Note that it is good practice to name your functions according to the Verb-Noun cmdlet naming standard, so something like Invoke-Script instead of exeScrpt. If you really want a single word as the name, then you can additionally create an alias for your function:
New-Alias -Name exeScrpt -Value Invoke-Script

Microsoft's Consistency in PowerShell CmdLet Parameter Naming

Let's say I wrote a PowerShell script that includes this command:
Get-ChildItem -Recurse
But instead I wrote:
Get-ChildItem -Re
To save time. After some time passed and I upgraded my PowerShell version, Microsoft decided to add a parameter to Get-ChildItem called "-Return" that, for example, returns True or False depending on whether any items are found.
In that virtual scenario, do I have to edit all my former scripts to ensure that they will function as expected? I understand Microsoft's attempt to save me typing time, but this is my concern, and therefore I will probably always try to write the complete parameter name.
Unless of course you know something I don't. Thank you for your insight!
This sounds more like a rant than a question, but to answer:
In that virtual scenario, do I have to edit all my former scripts to ensure that they will function as expected?
Yes!
You should always use the full parameter names in scripts (or any other snippet of reusable code).
Automatic resolution of partial parameter names, aliases and other shortcuts is great for convenience when using PowerShell interactively. It lets us fire up powershell.exe and do:
ls -re *.ps1|% FullName
when we want to find the path to all scripts under the profile (home) directory. Great for exploration!
But if I were to incorporate that functionality into a script I would do:
Get-ChildItem -Path $Home -Filter *.ps1 -Recurse |Select-Object -ExpandProperty FullName
not just for the reasons you mentioned, but also for consistency and readability - if a colleague of mine comes along and maybe isn't familiar with the shortcuts I'm using, he'll still be able to discern the meaning and expected output from the pipeline.
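To make the concern concrete, this kind of breakage can already be provoked today with an abbreviation that matches several existing parameters:
# -F matches more than one parameter (-Filter, -Force, -File), so this fails
# with a "parameter name 'F' is ambiguous" binding error:
Get-ChildItem -F

# A -Return parameter, if it were ever added, would make -Re ambiguous in exactly the same way.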
Note: There are currently three open issues on GitHub to add warning rules for this in PSScriptAnalyzer - I'm sure the project maintainers would love a hand with this :-)

Referencing text after script is called within PS1 Script

Let's take the PowerShell statement below as an example:
powershell.exe c:\temp\windowsbroker.ps1 IIS
Is it possible to have it scripted within windowsbroker.ps1 to check for that IIS string, and if it's present to do a specific install script? The broker script would be intended to install different applications depending on what string followed it when it was called.
This may seem like an odd question, but I've been using CloudFormation to spin up application environments and I'm specifying an "ApplicationStack" parameter that will be referenced at the time when the powershell script is run so it knows which script to run to install the correct application during bootup.
What you're trying to do is called argument or parameter handling. In its simplest form PowerShell provides all arguments to a script in the automatic variable $args. That would allow you to check for an argument IIS like this:
if ($args -contains 'iis') {
    # do something
}
or like this if you want the check to be case-sensitive (which I wouldn't recommend, since Windows and PowerShell usually aren't):
if ($args -ccontains 'IIS') {
    # do something
}
However, since apparently you want to use the argument as a switch to trigger specific behavior of your script, there are better, more sophisticated ways of doing this. You could add a Param() section at the top of your script and check if the parameter was present in the arguments like this (for a list of things to install):
Param(
    [Parameter()]
    [string[]]$Install
)

$Install | ForEach-Object {
    switch ($_) {
        'IIS' {
            # do something
        }
        ...
    }
}
or like this (for a single option):
Param(
    [switch]$IIS
)

if ($IIS.IsPresent) {
    # do something
}
You'd run the script like this:
powershell "c:\temp\windowsbroker.ps1" -Install "IIS",...
or like this respectively:
powershell "c:\temp\windowsbroker.ps1" -IIS
Usually I'd prefer switches over parameters with array arguments (unless you have a rather extensive list of options), because with the latter you have to worry about the spelling of the array elements, whereas with switches you get a built-in spell check.
Using a Param() section will also automatically add a short usage description to your script:
PS C:\temp> Get-Help windowsbroker.ps1
windowsbroker.ps1 [-IIS]
You can further enhance this built-in help for your script via comment-based help.
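A minimal comment-based help block (the wording here is just an example), placed directly above the Param() section, would then be picked up by Get-Help windowsbroker.ps1 -Full:
<#
.SYNOPSIS
    Installs an application stack on this machine.
.PARAMETER IIS
    When specified, installs and configures IIS.
.EXAMPLE
    .\windowsbroker.ps1 -IIS
#>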
Using parameters has a lot of other advantages on top of that (even though they probably aren't of that much use in your scenario). You can do parameter validation, make parameters mandatory, define default values, read values from the pipeline, make parameters depend on other parameters via parameter sets, and so on. See here and here for more information.
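As a small sketch of a few of those features (the parameter names and values are illustrative):
Param(
    # Mandatory, positional, and restricted to a known set of stack names:
    [Parameter(Mandatory = $true, Position = 0)]
    [ValidateSet('IIS', 'SQL', 'App')]
    [string[]]$Install,

    # Optional parameter with a default value:
    [string]$LogPath = "$env:TEMP\windowsbroker.log"
)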
Yes, they are called positional parameters. You declare the parameters at the beginning of your script:
Param(
    [string]$appToInstall
)
You could then write your script as follows:
switch ($appToInstall) {
    "IIS" { "Install IIS here" }
}
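With that Param() block, the script can be called exactly as in the question, because the first positional argument binds to $appToInstall:
powershell.exe c:\temp\windowsbroker.ps1 IIS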

Execute a parameter passed into a powershell script as if it were a line in the script

I've got a wrapper powershell script that I'm hoping to use to automate a few things. It's pretty basic, and accepts a parameter that I want the script to run as if it were a line in the script. I absolutely cannot get it to work.
example:
param( [string[]] $p)
echo $p
# Adds the base cmdlets
Add-PSSnapin VMware.VimAutomation.Core
# Add the following if you want to do things with Update Manager
Add-PSSnapin VMware.VumAutomation
# This script adds some helper functions and sets the appearance. You can pick and choose parts of this file for a fully custom appearance.
. "C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI\Scripts\Initialize-VIToolkitEnvironment.ps1"
$p
In the example above, I want $p to execute as if it were a line in the script. I know this isn't secure, and that's probably where the problem lies.
Here is how I try running the script and passing in a parameter for $p:
D:\ps\test>powershell -command "D:\ps\test\powershell_wrapper.ps1" 'Suspend-VM servername -Verbose -Confirm:$False'
How can I get my parameter of 'Suspend-VM servername -Verbose -Confirm:$False' to run inside my script? If I just include the value in the script instead of pass it in as a parameter it runs without any issues...
You can basically approach this two ways, depending on what your needs really are and how you want to structure your code.
Approach #1 - Invoke-Expression
Invoke-Expression basically allows you to treat a string like an expression and evaluate it. Consider the following trivial example:
Invoke-Expression '"Hello World"'
That will evaluate the string as if it were an expression typed in directly, and place the string "Hello World" on the pipeline. You could use that to take your string parameter and run it on-the-fly in your script.
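Applied to the wrapper in the question, that means replacing the final $p line (which only echoes the string) with something along these lines:
# Evaluate the passed-in command text instead of just outputting it.
# "$p" turns the [string[]] parameter into a single space-joined string first.
Invoke-Expression "$p"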
Approach #2 - Using a ScriptBlock
PowerShell has a special data type called a ScriptBlock, where you can bind a script to a variable, and then invoke that script as part of your code. Again, here is a trivial example:
function Test-SB([ScriptBlock]$sb) {
    $sb.Invoke()
}
Test-SB -sb {"Hello World"}
This example creates a function with a single parameter $sb that is of type ScriptBlock. Notice the parameter is bound to the actual chunk of code {"Hello World"}? That code is assigned to the $sb parameter, and then a call to the .Invoke method actually executes the code. You could adapt your code to take in a ScriptBlock and invoke it as part of your script.
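A rough sketch of that adaptation (easy to call from an existing PowerShell session, though passing a script block when launching powershell.exe from cmd.exe is more awkward):
param([scriptblock]$p)

# ... Add-PSSnapin / initialization lines as in the original script ...

& $p    # invoke the passed-in script block

# Example call from a PowerShell session:
# .\powershell_wrapper.ps1 -p { Suspend-VM servername -Verbose -Confirm:$false }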
Approach #3 - Updating your profile
OK, so I said there were two ways to approach it. There is actually a third... sort of... You could add the VMware cmdlets to your $profile so they are always present and you don't need your wrapper to load in those libraries. Granted, this is a pretty big hammer - but it might make sense if this is the environment you are constantly working in. You could also just create a shortcut to PowerShell that runs a .ps1 on startup that includes those libraries and hangs around (this is what MS did with the SharePoint admin shell and several others). Take a look at this MSDN page to get more info on the $profile and if it can help you out:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb613488.aspx
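As a rough sketch of that last idea (the paths are just examples), a startup script plus a modified shortcut target could look like this:
# C:\ps\vmware-init.ps1 - loads the VMware snap-ins so they are always available
Add-PSSnapin VMware.VimAutomation.Core
Add-PSSnapin VMware.VumAutomation

# Shortcut target / command line that starts PowerShell with the snap-ins preloaded:
# powershell.exe -NoExit -File "C:\ps\vmware-init.ps1"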