If you were to make a tool that:
sys-admins would use (e.g. system monitoring or backup/recovery tools)
has to be script-able on Windows
Would you make the tool:
A command-line interface tool?
PowerShell cmdlet?
GUI tool with public API?
I heard PowerShell is big among sys-admins, but I don't know how big compared to CLI tools.
PowerShell.
With PowerShell, you have your choice of creating reusable commands either as PowerShell scripts or as binary PowerShell cmdlets. PowerShell is designed specifically for command-line use, supporting output redirection as well as easily launching EXEs and capturing their output. One of the best parts about PowerShell, IMO, is that it standardizes and handles parameter parsing for you. All you have to do is declare the parameters for your command, and PowerShell provides the parameter-parsing code for you, including support for typed, optional, named, positional, mandatory, and pipeline-bound parameters. For example, the following function declaration shows this in action:
function foo($Path = $(throw 'Path is required'), $Regex, [switch]$Recurse)
{
}
# Mandatory
foo
Path is required
# Positional
foo c:\temp '.*' -recurse
# Named - note fullname isn't required - just enough to disambiguate
foo -reg '.*' -p c:\temp -rec
PowerShell 2.0 advanced functions provide even more capabilities, such as parameter aliases (e.g., -CN as an alias for -ComputerName), parameter validation ([ValidateNotNull()]), and doc comments for usage and help, e.g.:
<#
.SYNOPSIS
Some synopsis here.
.DESCRIPTION
Some description here.
.PARAMETER Path
The path to the ...
.PARAMETER LiteralPath
Specifies a path to one or more locations. Unlike Path, the value of
LiteralPath is used exactly as it is typed. No characters are interpreted
as wildcards. If the path includes escape characters, enclose it in single
quotation marks. Single quotation marks tell Windows PowerShell not to
interpret any characters as escape sequences.
.EXAMPLE
C:\PS> dir | AdvFuncToProcessPaths
Description of the example
.NOTES
Author: Keith Hill
Date: June 28, 2010
#>
function AdvFuncToProcessPaths
{
[CmdletBinding(DefaultParameterSetName="Path")]
param(
[Parameter(Mandatory=$true, Position=0, ParameterSetName="Path",
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true,
HelpMessage="Path to bitmap file")]
[ValidateNotNullOrEmpty()]
[string[]]
$Path,
[Alias("PSPath")]
[Parameter(Mandatory=$true, Position=0, ParameterSetName="LiteralPath",
ValueFromPipelineByPropertyName=$true,
HelpMessage="Path to bitmap file")]
[ValidateNotNullOrEmpty()]
[string[]]
$LiteralPath
)
...
}
See how the attributes give you finer-grained control over PowerShell's parameter-parsing engine. Also note the doc comments, which can be used for both usage and help like so:
AdvFuncToProcessPaths -?
man AdvFuncToProcessPaths -full
This is really quite powerful, and it is one of the main reasons I stopped writing my own little C# utility EXEs: the parameter parsing wound up being 80% of the code.
I always create a command line tool first:
It is far easier to automate / incorporate into scripts than a GUI (and much less work than producing an API)
It will run on pretty much all Windows machines (including older machines without PowerShell installed)
Although PowerShell is a great tool for sysadmins, I don't yet think it is widespread enough to remove the need to also produce a traditional command-line tool - hence I always make a command-line tool first (although I might also choose to go on to produce a PowerShell cmdlet).
Similarly, although a well-thought-out API may be easier for scripting, an API restricts which languages users can script in, so it is always a good idea to additionally provide a command-line tool as a fallback / easy alternative.
Python.
Ideally suited to command-line applications and system administration. Easier to code than most shells, and it also runs faster than most shells.
So this was my script:
$ans=Read-Host "What process would you like to query?"
Get-WmiObject Win32_Process -Filter "name='$ans'" | Format-Table HandleCount,VirtualSize,UserModeTime,KernelModeTime,ProcessID,Name
Now I need to create a script which requires the argument be passed when the script is executed. I'm a little confused on how to do this successfully. This is what I'm trying to work with:
#!/bin/bash
echo $1
Get-WmiObject win32_process -Filter "name='$1'" | Format-Table HandleCount,VirtualSize,UserModeTime,ProcessID,Name
To make a parameter mandatory (required) in PowerShell, you must use an advanced script or function; to create a script file, save your code in a .ps1 file[1].
param(
[Parameter(Mandatory)]
[string] $Name
)
Get-CimInstance win32_process -Filter "name='$Name'"
Note:
The code uses Get-CimInstance instead of Get-WmiObject, because the CIM cmdlets superseded the WMI cmdlets in PowerShell v3 (released in September 2012). Therefore, the WMI cmdlets should be avoided, not least because PowerShell [Core] (version 6 and above), where all future effort will go, doesn't even have them anymore. For more information, see this answer.
The Format-Table call was intentionally omitted, because Format-* cmdlets should only ever be used to format data for display, never for subsequent programmatic processing, i.e. never for outputting data - see this answer.
Outputting just data means that PowerShell controls in what format your data is displayed; for information on how to control this format, see this answer.
[1] This is enough to make a plain-text file executable from PowerShell, without needing to include the .ps1 extension in the invocation. On Unix-like platforms, you can create an executable shell script without a filename extension via a shebang line such as #!/usr/bin/env pwsh plus chmod a+x, so that it can be called from outside PowerShell as well.
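For example, a minimal executable script combining such a shebang line with the mandatory parameter might look like this (a sketch only; the query simply mirrors the answer above):

```powershell
#!/usr/bin/env pwsh
# Mandatory: omitting -Name triggers an interactive prompt for the value.
param(
    [Parameter(Mandatory)]
    [string] $Name
)
Get-CimInstance Win32_Process -Filter "name='$Name'"
```

After chmod a+x, the file can be invoked directly from a Unix shell, or from PowerShell on any platform.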
I have this function:
function traced()
{
write-host "$args"
invoke-expression -Command "$args"
}
and I use it in several places like traced cp "$($_.FullName)" (join-path $directory $newfile) so that I have a log of all of the places that get copied (or removed, or whatever)
But when the directory contains spaces and dashes, it results in invoke-expression throwing.
I guess I could just define traced-cp and traced-rm, but if I have a lot of functions I want to trace, what's the generic answer? I just want a function that prints, then evaluates, the exact command it's given. From what I understand, the & operator isn't what I want here; it won't work for shell builtins.
[...] so that I have a log of all of the places that get copied (or removed, or whatever)
I'd strongly recommend you use transcript logging for this!
You can start a transcript interactively with the Start-Transcript cmdlet, but if you want to keep a transcript of every single instance of PowerShell you launch, I'd suggest turning it on by default!
Open the local policy editor (gpedit.msc) on your Windows box and navigate to:
Computer Configuration
> Administrative Templates
> Windows Components
> Windows PowerShell
Select the policy setting named "Turn on PowerShell Transcription", set it to Enabled, and optionally configure your preferred output directory (defaults to your home folder).
This way, you'll always have a full transcript of your interactions in PowerShell :)
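For ad-hoc use, a minimal sketch of the interactive approach (the log path and the copy operation are just examples):

```powershell
# Everything between Start-Transcript and Stop-Transcript - the commands you
# run and their output - is appended to the log file.
Start-Transcript -Path "$HOME\copy-log.txt" -Append

Copy-Item 'C:\data\report.txt' 'D:\backup\'   # example operation to be logged

Stop-Transcript
```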
Consider using argument splatting to build your command instead of building a string-based command with Invoke-Expression. I also don't know where you heard that & doesn't work with shell built-ins, but it works with both commands and cmdlets.
Here is the official Microsoft documentation on splatting in PowerShell.
This approach also eliminates the difficulty of crafting a command string correctly, having to escape characters, and dealing with spaces in paths - splatting with named or positional arguments takes care of most of this for you.
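As a minimal sketch of what a generic traced wrapper could look like with this approach (the function name and parameter names here are illustrative, not from the original post):

```powershell
function Invoke-Traced {
    param(
        # Name of the command (cmdlet, function, or EXE) to run.
        [Parameter(Mandatory, Position = 0)]
        [string] $Command,

        # All remaining arguments are collected and splatted through as-is,
        # so paths containing spaces or dashes need no extra escaping.
        [Parameter(ValueFromRemainingArguments)]
        [object[]] $Arguments = @()
    )
    Write-Host "$Command $Arguments"
    & $Command @Arguments
}

# Usage: spaces and dashes in the path are handled without quoting gymnastics.
Invoke-Traced Copy-Item 'C:\My Files\a - draft.txt' 'D:\backup\'
```

Because the arguments are passed as an array rather than re-parsed from a string, none of the Invoke-Expression escaping problems arise.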
I would suggest using -verbose with copy-item or remove-item, and also -passthru with copy-item.
I am newbie in PowerShell and I am searching for a way to make the script more dynamic
As an example in the script file I have this line
cd C:\Users\Future\Desktop
How can I make the path dynamic ...? I mean to let the other people who will take this script file to run it without changing the username in this line?
You can either add a parameter to the script or use the USERPROFILE variable:
cd (Join-Path $env:USERPROFILE 'Desktop')
To expand upon @Martin Brandl's answer, I would suggest going the parameter route. You can set a default value for your own use while also allowing people to specify a different path when they run the script. As a small example:
[CmdletBinding()]
param(
[string]$Path = "C:\Users\Future\Desktop"
)
Set-Location $Path
If you use the Mandatory parameter setting, it will require someone to input a Path each time the script is run, which is similar to using Read-Host:
[CmdletBinding()]
param(
[Parameter(Mandatory=$true)]
[string]$Path
)
Set-Location $Path
There are other parameter settings you can use for validation purposes.
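For instance, a small sketch of two common validation attributes (the default values here are only illustrative):

```powershell
[CmdletBinding()]
param(
    # Reject any value that isn't an existing path.
    [ValidateScript({ Test-Path $_ })]
    [string] $Path = "$env:USERPROFILE\Desktop",

    # Restrict the value to a fixed set of allowed choices.
    [ValidateSet('Dev', 'Test', 'Prod')]
    [string] $Environment = 'Dev'
)
Set-Location $Path
```

If a caller passes a nonexistent path or an environment outside the set, PowerShell rejects the call before your script body ever runs.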
I would recommend looking through this page for more information on how to set up functions, as it describes a lot of the options you can use in parameters:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_functions?view=powershell-6
I have a PowerShell script file stored in an internal artifact server. The script URL is http://company-server/bootstrap.ps1.
What is a concise way to download that script and execute with a custom parameter?
I want to send such a command to users, who will copy-paste it over and over, so it must be a single-line and should be short.
What I currently have works, but it is long and unwieldy:
$c=((New-Object System.Net.WebClient).DownloadString('http://company-server/bootstrap.ps1'));Invoke-Command -ScriptBlock ([Scriptblock]::Create($c)) -ArgumentList 'RunJob'
I am wondering if there is shorter way to do this.
Note: From a code golf perspective, the solutions below could be shortened further, by eliminating insignificant whitespace; e.g., &{$args[0]}hi instead of & { $args[0] } hi. However, in the interest of readability such whitespace was kept.
A short formulation of a command that downloads a script via HTTP and executes it locally, optionally with arguments, is probably the following, taking advantage of:
alias irm for Invoke-RestMethod, in lieu of (New-Object System.Net.WebClient).DownloadString()
omitting quoting where it isn't necessary
relying on positional parameter binding
& ([scriptblock]::Create((irm http://company-server/bootstrap.ps1))) RunJob
RunJob is the OP's custom argument to pass to the script.
An even shorter, but perhaps more obscure approach is to use iex, the built-in alias for Invoke-Expression, courtesy of this GitHub comment.
iex "& { $(irm http://company-server/bootstrap.ps1) } RunJob"
As an aside: in general use, Invoke-Expression should be avoided.
The command uses an expandable string ("...", string interpolation) to create a string with the remote script's content enclosed in a script block { ... }, which is then invoked in a child scope (&). Note how the arguments to pass to the script must be inside "...".
However, there is a general caveat (which doesn't seem to be a problem for you): if the script terminates with exit, the calling PowerShell instance is exited too.
There are two workarounds:
Run the script in a child process:
powershell { iex "& { $(irm http://company-server/bootstrap.ps1) } RunJob" }
Caveats:
The above only works from within PowerShell; from outside of PowerShell, you must use powershell -c "..." instead of powershell { ... }, but note that properly escaping embedded double quotes, if needed (for a URL with PS metacharacters and/or custom arguments with, say, spaces), can get tricky.
If the script is designed to modify the caller's environment, the modifications will be lost due to running in a child process.
Save the script to a temporary file first:
Note: The command is spread across multiple lines for readability, but it also works as a one-liner:
& {
$f = Join-Path ([IO.Path]::GetTempPath()) ([IO.Path]::GetRandomFileName() + '.ps1');
irm http://company-server/bootstrap.ps1 > $f;
& $f RunJob;
ri $f
}
The obvious down-side is that the command is much longer.
Note that the command is written with robustness and cross-platform compatibility in mind, so that it also works in PowerShell Core, on all supported platforms.
Depending on what platforms you need to support / what assumptions you're willing to make (e.g., that the current dir. is writeable), the command can be shortened.
Potential future enhancements
GitHub issue #5909, written as of PowerShell Core 6.2.0-preview.4 and revised as of PowerShell Core 7.0, proposes enhancing the Invoke-Command (icm) cmdlet to greatly simplify download-script-and-execute scenarios, so that you could invoke the script in question as follows:
# WISHFUL THINKING as of PowerShell Core 7.0
# iwr is the built-in alias for Invoke-WebRequest
# icm is the built-in alias for Invoke-Command.
iwr http://company-server/bootstrap.ps1 | icm -Args RunJob
GitHub issue #8835 goes even further, suggesting an RFC be created to introduce a new PowerShell provider that allows URLs to be used in places where only files were previously accepted, enabling calls such as:
# WISHFUL THINKING as of PowerShell Core 7.0
& http://company-server/bootstrap.ps1 RunJob
However, while these options are very convenient, there are security implications to consider.
Here is a shorter solution (158 chars.)
$C=(New-Object System.Net.WebClient).DownloadString("http://company-server/bootstrap.ps1");icm -ScriptBlock ([Scriptblock]::Create($c)) -ArgumentList "RunJob"
Here is 121
$C=(curl http://company-server/bootstrap.ps1).content;icm -ScriptBlock ([Scriptblock]::Create($c)) -ArgumentList "RunJob"
Here is 108
$C=(curl http://company-server/bootstrap.ps1).content;icm ([Scriptblock]::Create($c)) -ArgumentList "RunJob"
Here is 98
$C=(iwr http://company-server/bootstrap.ps1).content;icm -sc([Scriptblock]::Create($c)) -ar RunJob
Thanks to Ansgar Wiechers
I'm writing a PowerShell script for a particular client. I tested the script on several of our desktop machines, and the tried the script on the client's system. I ran into the following issues:
Issue #1
As a good little Do Bee, I defined a bunch of help text, so when you run Get-Help on my script, it would give you detailed help. I used the following syntax:
<#
Get-Help
.SYNOPSIS
C> FileWatcher.ps1 -FilePath <FileName> -SenderEmail Bob@Foo.com ^
-ToEmail Joe@Bar.com -SmtpServer smtp.foo.com
.DESCRIPTION
Blah, blah, blah...
#>
On my machine, it works, and this is recognized as comment. However, on the client's machine, this produced an error as soon as it saw the C> which it thought was a redirect. Getting rid of the #> and <# and putting # in front of each line got rid of this problem, and brought us Issue #2.
Issue #2
I defined a bunch of parameters like this:
Param (
[ValidateScript({Test-Path $_ -PathType 'Leaf'})]
[Parameter(
Position=0,
HelpMessage="File you want to watch")]
$FilePath = "\\rke032\QuickCon\wincommlog.000",
[String]
[Parameter(
blah, blah, blah
PowerShell coughed on [ValidateScript({Test-Path $_ -PathType 'Leaf'})] saying it wasn't a valid type.
As I said, we tested this on a wide variety of Windows machines. I have a Windows XP machine; its PowerShell build version is 6.0.6002.1811. On another machine that's running Windows 7, the PowerShell build version is 6.1.7600.
On the client's machine (a Windows 2008 Server), which is giving us these errors, the build version is 6.0.6001.18000.
We ran the PowerShell scripts by bringing up a PowerShell window and then typing in the script's name. The ExecutionPolicy is set to Unrestricted. The script has a *.ps1 suffix on the end. I can't believe there's that big a difference between build 6.0.6002 and 6.0.6001 to cause a problem with unrecognized syntax. Is there something else going on?
Compare the output of $PSVersionTable, not the build version. In particular, the PSVersion property is interesting. I guess you have PowerShell 1.0 on one machine and 2.0 on another. The extension is .ps1 regardless of the PowerShell version.
This guess is reinforced by noticing that block comments don't work and neither do parameter attributes.
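A quick way to check on each machine (note that $PSVersionTable itself was only introduced in PowerShell 2.0, so its very absence is already diagnostic):

```powershell
# On PowerShell 2.0+ this prints the engine version;
# on 1.0 the variable doesn't exist at all.
if (Test-Path Variable:\PSVersionTable) {
    $PSVersionTable.PSVersion
} else {
    'PowerShell 1.0 (no $PSVersionTable)'
}
```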