I am a newbie in PowerShell and I am looking for a way to make my script more dynamic.
As an example, in the script file I have this line:
cd C:\Users\Future\Desktop
How can I make the path dynamic? That is, how can other people who receive this script run it without changing the username in this line?
You can either add a parameter to the script or use the USERPROFILE variable:
cd (Join-Path $env:USERPROFILE 'Desktop')
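A minimal sketch combining the two approaches - a parameter that defaults to the calling user's Desktop while still letting callers pass any other path (the parameter name is just an example):
param(
    # Defaults to the current user's Desktop; callers can override it.
    [string]$Path = (Join-Path $env:USERPROFILE 'Desktop')
)

Set-Location $Path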
To expand upon @Martin Brandl's answer, I would suggest going the Parameter route. You can set a default value for your own use while also allowing people to specify a different path when they run the script. As a small example:
[CmdletBinding()]
param(
    [string]$Path = "C:\Users\Future\Desktop"
)

Set-Location $Path
If you use the Mandatory parameter setting, it will require someone to enter a Path each time the script is run, which is similar to using Read-Host:
[CmdletBinding()]
param(
    [Parameter(Mandatory=$true)]
    [string]$Path
)

Set-Location $Path
There are other parameter settings you can use for validation purposes.
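For example, here is a small sketch using the ValidateScript attribute to reject paths that do not exist before the rest of the script runs (just one of several validation attributes available):
[CmdletBinding()]
param(
    [Parameter(Mandatory=$true)]
    [ValidateScript({ Test-Path $_ -PathType Container })]  # the path must be an existing folder
    [string]$Path
)

Set-Location $Path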
I would recommend looking through this page for more information on how to set up functions, as it describes a lot of the options you can use in parameters:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_functions?view=powershell-6
Related
I'm using Advanced Installer to create an MSI package. I want to copy some files and folders to "[APPDIR]" after the installation completes. (I know I could do this by adding them to the Files and Folders section in Advanced Installer, but I don't want to do that, because my files and folders are dynamic in each installation on the customer's machine.)
I wrote an inline PowerShell script like the one below:
Param( [string] $source, [string] $dest )

$exclude = @('web.config')

Get-ChildItem $source -Recurse -Exclude $exclude |
    Copy-Item -Destination { Join-Path $dest $_.FullName.Substring($source.Length) }
and in the parameter section I filled in "[SourceDir]Project", "[APPDIR]Project",
but it doesn't work. Why?
Abbas has since confirmed that the problem was one of command-line (parameter) syntax:
The parameter section - what to pass to the PowerShell script from Advanced Installer - was filled in as:
"[SourceDir]Project", "[APPDIR]Project" # !! WRONG, due to the comma
whereas it should have been:
"[SourceDir]Project" "[APPDIR]Project" # OK: *space-separated* arguments
Calling scripts/functions/cmdlets in PowerShell works as it does in shells, not as in programming languages; that is, you must separate the arguments being passed with spaces.
By contrast, using , between tokens constructs an array that is passed as a single argument.
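A quick way to see the difference from a PowerShell prompt (the function below is purely illustrative):
function Show-Args { "Received $($args.Count) argument(s): $args" }

Show-Args "one" "two"    # space-separated: two arguments
Show-Args "one", "two"   # comma: one argument (an array of two strings)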
From PowerShell run Get-Help about_Command_Syntax for more information.
It depends; you need to give more details. What are the execution settings for your PowerShell custom action?
Have you checked the verbose log to see whether the parameters are passed correctly?
Your custom action should be scheduled as deferred with no impersonation, so it is executed after the APPDIR folder is created by the setup package and it has all the rights to write in that location.
Also, you should add rollback and uninstall custom actions to clean up the files, as during an uninstall or a canceled/failed installation those resources will not be cleaned up by Windows Installer.
I'm tasked with writing a PowerShell script to perform a file download, which will eventually be executed as a scheduled task once a week. I have no background in programming in the Windows environment, so this has been an interesting day.
I am experiencing a problem with the unexpected handling of the $pwd and $home of the shell.
I pass into my program a download URL and a destination file. I would like the destination file to be a relative path, e.g., download/temp.txt.gz
param($srcUrl, $destFile)
$client = new-object System.Net.WebClient
$client.DownloadFile($srcUrl, $destFile)
Ungzip-File $destFile
Remove-Item $destFile
This actually fails on the call to Remove-Item. If $destFile is a relative path then the script happily downloads the file and puts it in a file relative to $home. Likewise, I then unzip this and my function Ungzip-File makes use of System.IO.Filestream, and it seems to find this file. Then Remove-Item complains that there is no file in the path relative to $pwd.
I am a bit baffled, as these are all part of the shell, so to speak. I'm not clear why these functions would handle the path differently and, more to the point, I'm not sure how to fix this so both relative and absolute paths work. I've tried looking at the System.IO.Path methods, but since my $home and $pwd are on different drives, I can't even use IsPathRooted, which seemed so close when I found it.
Any help?
You have to be aware of where you are in the path. $pwd works just fine in the command shell, but let's say you have started your script from a scheduled job. You might think $pwd is where your script resides and code accordingly, only to find out that it is actually, say, %windir%\system32.
In general, I would use a full path to the destination, and for paths relative to the script folder I would use $PSScriptRoot/folder_name/file_path.
There are catches there too. For example, I noticed $PSScriptRoot will resolve just fine within the script but not within the Param() block. I would highly suggest using Write-Verbose when coding and testing, so you know what it thinks the path is:
[CmdletBinding()]  # you need this for -Verbose to work!
Param()

Write-Verbose "path is $pwd"
Write-Verbose "removing $destFile"
Remove-Item $destFile
and add -verbose when you are calling your script/function:
myscript.ps1 -verbose
mydownloadfunction -verbose
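Another option, sketched below with the parameter names from the question, is to resolve a relative destination to a full path up front, so that the .NET WebClient call and the PowerShell cmdlets all agree on the same location:
param($srcUrl, $destFile)

# If the caller passed a relative path, anchor it to the current PowerShell location
# before handing it to .NET, which resolves relative paths on its own.
if (-not [System.IO.Path]::IsPathRooted($destFile)) {
    $destFile = Join-Path (Get-Location).ProviderPath $destFile
}

$client = New-Object System.Net.WebClient
$client.DownloadFile($srcUrl, $destFile)
Ungzip-File $destFile    # your existing function from the question
Remove-Item $destFile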
To use a relative path, you need to specify the current directory with ./
Example: ./download/temp.txt.gz
You can also change your location in the middle of the script with Set-Location (alias: cd)
I would like the second function call in this script to throw an error:
function Deploy
{
    param(
        [Parameter(Mandatory=$true)]
        [ValidateNotNullOrEmpty()]
        [string]$BuildName
    )

    Write-Host "Build name is: $BuildName"
}
Deploy "Build123"
Deploy #Currently prompts for input
Prompting is great for using the script interactively, but this will also be executed by our build server.
Is my best bet just doing some custom validation with an if or something?
Once the parameter is marked as mandatory, PowerShell will prompt for a value. That said, if you remove the Mandatory attribute, you can set a default value that throws:
function Deploy
{
    param(
        [Parameter()]
        [ValidateNotNullOrEmpty()]
        [string]$BuildName = $(throw "BuildName is mandatory, please provide a value.")
    )

    Write-Host "Build name is: $BuildName"
}
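Called the same way as in the question, the second call now throws instead of prompting:
Deploy "Build123"   # Build name is: Build123
Deploy              # throws: BuildName is mandatory, please provide a value.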
@Emperor XLII has a nice comment in the question that I think can be a better answer for some use cases:
if you run powershell.exe with the -NonInteractive flag, missing
mandatory parameters will cause an error and result in a non-zero exit
code for the process.
The reasons to use this can be:
You have a lot of such Mandatory=$true parameters and the cost is high to convert all of them.
The script will be used both interactively and non-interactively, and when run interactively you do want to be prompted for missing parameters.
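For example, a build step could invoke the script non-interactively and check the exit code (the script name here is just a placeholder):
powershell.exe -NonInteractive -File .\Deploy.ps1
if ($LASTEXITCODE -ne 0) {
    Write-Error "Deploy.ps1 failed - most likely a missing mandatory parameter."
}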
I have written a PowerShell script. The code has paths that are specific to my PC.
The same code cannot be executed by another person on their machine because the paths are different. Please let me know a way to make my code work on all machines.
It depends on the paths. If they're to programs in \Program Files perhaps you can use the environment variable $env:ProgramFiles in your path spec. You can also parameterize your script to take the path like so:
param($path)
# rest of script ...
Note that the param() statement must be the first non-comment line in your script.
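Callers then supply the path when they run it, for example (the script name is hypothetical):
.\MyScript.ps1 -path 'D:\SomeOtherFolder'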
You could also use the special $MyInvocation variable available to running scripts. It has access to the path the script was executed from, among other things.
For example a script I use has this line:
$InputCSV = (split-path $myinvocation.mycommand.path) + "\filename.csv"
Which means that no matter where the script is run from, it will know to grab the CSV file from the same folder as the script.
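On PowerShell 3.0 and later, the automatic $PSScriptRoot variable gives the same folder more directly:
$InputCSV = Join-Path $PSScriptRoot 'filename.csv'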
If you were to make a tool that:
sys-admins would use (e.g. system monitoring or backup/recovery tools)
has to be script-able on Windows
Would you make the tool:
A command-line interface tool?
PowerShell cmdlet?
GUI tool with public API?
I heard PowerShell is big among sys-admins, but I don't know how big compared to CLI tools.
PowerShell.
With PowerShell you have your choice of creating reusable commands in either PowerShell script or as a binary PowerShell cmdlet. PowerShell is specifically designed for command-line interfaces, supporting output redirection as well as easily launching EXEs and capturing their output. One of the best parts about PowerShell, IMO, is that it standardizes and handles parameter parsing for you. All you have to do is declare the parameters for your command, and PowerShell provides the parameter parsing code for you, including support for typed, optional, named, positional, mandatory, pipeline-bound, etc. For example, the following function declaration shows this in action:
function foo($Path = $(throw 'Path is required'), $Regex, [switch]$Recurse)
{
}

# Mandatory
foo                                 # error: Path is required

# Positional
foo c:\temp '.*' -recurse

# Named - note the full name isn't required, just enough to disambiguate
foo -reg '.*' -p c:\temp -rec
PowerShell 2.0 advanced functions provide even more capabilities, such as parameter aliases (e.g. -CN for -ComputerName), parameter validation ([ValidateNotNull()]), and doc comments for usage and help, e.g.:
<#
.SYNOPSIS
Some synopsis here.
.DESCRIPTION
Some description here.
.PARAMETER Path
The path to the ...
.PARAMETER LiteralPath
Specifies a path to one or more locations. Unlike Path, the value of
LiteralPath is used exactly as it is typed. No characters are interpreted
as wildcards. If the path includes escape characters, enclose it in single
quotation marks. Single quotation marks tell Windows PowerShell not to
interpret any characters as escape sequences.
.EXAMPLE
C:\PS> dir | AdvFuncToProcessPaths
Description of the example
.NOTES
Author: Keith Hill
Date: June 28, 2010
#>
function AdvFuncToProcessPaths
{
    [CmdletBinding(DefaultParameterSetName="Path")]
    param(
        [Parameter(Mandatory=$true, Position=0, ParameterSetName="Path",
                   ValueFromPipeline=$true,
                   ValueFromPipelineByPropertyName=$true,
                   HelpMessage="Path to bitmap file")]
        [ValidateNotNullOrEmpty()]
        [string[]]
        $Path,

        [Alias("PSPath")]
        [Parameter(Mandatory=$true, Position=0, ParameterSetName="LiteralPath",
                   ValueFromPipelineByPropertyName=$true,
                   HelpMessage="Path to bitmap file")]
        [ValidateNotNullOrEmpty()]
        [string[]]
        $LiteralPath
    )
    ...
}
See how the attributes give you finer grained control over PowerShell's parameter parsing engine. Also note the doc comments that can be used for both usage and help like so:
AdvFuncToProcessPaths -?
man AdvFuncToProcessPaths -full
This is really quite powerful and one of the main reasons I stopped writing my own little C# utility exes. The parameter parsing wound up being 80% of the code.
I always create a command line tool first:
It is far easier to automate / incorporate into scripts than a GUI (much less work than producing an API)
It will run on pretty much all Windows machines (including older machines without PowerShell installed)
Although PowerShell is a great tool for sysadmins, I don't yet think it is widespread enough to escape the need to also produce a traditional command-line tool - hence I always make a command-line tool first (although I might also choose to go on to produce a PowerShell cmdlet).
Similarly, although a well-thought-out API may be easier for scripting, your API will place restrictions on what languages users can script in, and so it is always a good idea to additionally provide a command-line tool as a fallback / easy alternative.
Python.
Ideally suited for command-line applications and system administration. Easier to code than most shells. Also, runs faster than most shells.