Provide a parameter to a PS script stored as a string - powershell

This is the command that I provided to the customer before:
iex ((New-Object System.Net.WebClient).DownloadString('URL'))
It downloads the PS script as a string from the URL and executes it.
Now I need to pass a parameter to this script:
iex (((New-Object System.Net.WebClient).DownloadString('URL')) -Parameter 'parameter')
It is not working, because a string does not accept parameters.
I need to create a simple command (this is important) that will download the script from the URL and accept parameters. Otherwise I would have to save the string to a .ps1 file and execute it, passing the parameter.
Could you please help me with that? I am new to PS

Try the following approach:
& ([scriptblock]::Create(
(New-Object System.Net.WebClient).DownloadString('URL')
)) 'Parameter'
Note:
The above runs the script text in a child scope, due to use of &, the call operator, as would happen with local invocation of a script file. By contrast, Invoke-Expression (iex) runs it in the current scope, i.e. as if the script had been invoked with ., the dot-sourcing operator; change & to . if you want that behavior.
To download the script, you could alternatively use PowerShell's Invoke-RestMethod (irm) cmdlet: Invoke-RestMethod 'URL' or irm 'URL'
[scriptblock]::Create() creates a PowerShell script block, which can then be invoked with & (or .), while also accepting arguments.
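For instance, if the downloaded script declares named parameters, you can also pass them by name. A minimal sketch, assuming (hypothetically) that the script starts with param([string]$Mode):
# Hypothetical: assumes the downloaded script declares param([string]$Mode)
& ([scriptblock]::Create(
(New-Object System.Net.WebClient).DownloadString('URL')
)) -Mode 'parameter'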
As for what you tried:
Invoke-Expression (iex) - which should generally be avoided - has several drawbacks when it comes to executing a script file's content downloaded from the web:
It doesn't support passing arguments, as you've discovered.
An exit statement in the script text would cause the current PowerShell session to exit as a whole.
As noted, the code executes directly in the caller's scope, so that its variables, functions, ... linger in the session after execution (however, it's easy to avoid that with & { iex '...' } - see the sketch below).
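A minimal sketch of that containment technique ('URL' is a placeholder, as above):
# Wrapping iex in & { ... } runs the downloaded code in a child scope,
# so its variables and functions don't linger in your session.
& { iex ((New-Object System.Net.WebClient).DownloadString('URL')) }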
GitHub issue #5909 discusses enhancing the Invoke-Command cmdlet to robustly support download and execution of scripts directly from the web.

Related

Invoke-command and running ps1 with parameters

I'm trying to run a script using Invoke-Command to install Defender for Endpoint with some associated parameters.
If I run a standard ps1 using invoke-command it works with no issues. However, if I run the following:
Invoke-Command -ComputerName NAME -FilePath \\srv\share\install.ps1 -OnboardingScript \\srv\share\WindowsDefenderATPonboardingscript.cmd -Passive
I receive "A parameter cannot be found that matches parameter name 'OnboardingScript'". Can someone please help me understand how I invoke a command and run a script with parameters?
The parameters are already defined in the Install.ps1 file:
https://github.com/microsoft/mdefordownlevelserver/blob/main/Install.ps1
Many thanks in advance
Your Invoke-Command call has a syntax problem, as Santiago Squarzon points out:
Any pass-through arguments - those to be seen by the script whose path is passed to -FilePath - must be specified via the -ArgumentList (-Args) parameter, as an array.
# Simplified example with - of necessity - *positional* arguments only.
# See below.
Invoke-Command -ComputerName NAME -FilePath .\foo.ps1 -Args 'bar', 'another arg'
The same applies to the more common invocation form that uses a script block ({ ... }), via the (potentially positionally implied) -ScriptBlock parameter.
However, there's a catch: Only positional arguments can be passed that way (see the sketch after this list), which:
(a) requires that the target script support positional argument binding for all arguments of interest...
(b) ... which notably precludes passing switch parameters (type [switch]), such as -Passive in your call.
(c) requires you to pass the invariably positional arguments in the correct order.
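To make the limitation concrete, here is a hypothetical sketch - it assumes install.ps1 declared param([string]$OnboardingScript, [switch]$Passive), which may not match the real script:
# Only the [string] parameter can be bound, positionally, via -Args;
# the [switch] parameter -Passive cannot be passed this way at all.
Invoke-Command -ComputerName NAME -FilePath \\srv\share\install.ps1 -Args '\\srv\share\WindowsDefenderATPonboardingscript.cmd'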
Workaround:
Use a -ScriptBlock-based invocation, which allows for regular argument-passing with the usual support for named arguments (including switches):
If, as in your case, the script file is accessible by a UNC path visible to the remote session as well, you can simply call it from inside the remote script block.
Note: It isn't needed in your case, but you generally may need $using: references in order to incorporate values from the local session into the arguments - see further below for an example.
Invoke-Command -ComputerName NAME {
& \\srv\share\install.ps1 -OnboardingScript \\srv\share\WindowsDefenderATPonboardingscript.cmd -Passive
}
Otherwise (typically, a script file local to the caller):
Use a $using: reference to pass the content (source code) of your script file to the remote session, parse it into a script block there, and execute that script block with the arguments of interest:
$scriptContent = Get-Content -Raw \\srv\share\install.ps1
Invoke-Command -ComputerName NAME {
& ([scriptblock]::Create($using:scriptContent)) -OnboardingScript \\srv\share\WindowsDefenderATPonboardingscript.cmd -Passive
}
Small caveat: Since the original script file's source code is executed in memory in the remote session, file-related reflection information won't be available, such as the automatic variables that report a script file's full path and directory path ($PSCommandPath and $PSScriptRoot).
That said, the same applies to use of the -FilePath parameter, which essentially uses the same technique of copying the source code rather than a file to the remote session, behind the scenes.
Thanks for your reply. I have managed to get this working by adding -ScriptBlock { . "\\srv\share ..." }

Why is there no way to dot-source a script with PowerShell.exe?

The help document says that the script will be executed in 'dot-sourced' mode, but it isn't. Why?
I read the full text of the help document and couldn't find the reason, so I came for help.
PS> PowerShell.exe -File '.\dot-source-test.ps1'
PS> $theValue
PS> . '.\dot-source-test.ps1'
PS> $theValue
theValue
PS>
The content of 'dot-source-test.ps1' is $theValue = 'theValue'.
If the value of File is a file path, the script runs in the local scope ("dot-sourced"), so that the functions and variables that the script creates are available in the current session.
about_PowerShell_exe - PowerShell | Microsoft Docs
To prevent conceptual confusion:
In order to dot-source a script, i.e. execute it directly in the caller's scope (as opposed to a child scope, which is the default), so that the script's variables, function definitions, ... are seen by the caller:
In the current PowerShell session, just use ., the dot-sourcing operator, directly:
# Dot-source in the caller's scope.
# When executed at the prompt in an interactive PowerShell session,
# the script's definitions become globally available.
. '.\dot-source-test.ps1'
Via powershell.exe, the Windows PowerShell CLI[1]:
Note: Whatever dot-sourcing you perform this way is limited to the child process in which powershell.exe runs and its PowerShell session; it has no impact on the caller's session.
Dot-sourcing via the CLI makes sense only in two scenarios:
Scenario A: You're passing commands via the (possibly positionally implied) -Command (-c) parameter that relies on definitions that must first be dot-sourced from a script file, and you want the session to exit automatically when the commands have finished executing.
Scenario B: You're entering a (possibly nested) interactive PowerShell session into which you want to dot-source (pre-load) definitions from a script file; as any interactive session, you will need to exit it manually, typically with exit.
Scenario A: Pre-load definitions, execute commands that rely on them, then exit:
The following starts a (new) PowerShell session as follows:
Script file .\dot-source-test.ps1 is dot-sourced, which defines variable $theValue in the caller's (here: the global) scope.
The value of $theValue is output.
The new session is automatically exited on completing the commands.
PS> powershell -c '. .\dot-source-test.ps1; $theValue'
theValue
Scenario B: Enter a (new) interactive session with pre-loaded definitions:
Simply add the -noexit switch in order to enter an interactive session in which script file .\dot-source-test.ps1 has been dot-sourced:
powershell -noexit -c '. .\dot-source-test.ps1'
# You're now in a (new) interactive session in which $theValue is defined,
# and which you must eventually exit manually.
Note:
If neither -File nor a command (via explicit or implied -Command / -c) is specified, -noexit is implied.
Because -c is needed here for dot-sourcing, -noexit must be specified to keep the session open.
While using -File for dot-sourcing instead - powershell -noexit -File '.\dot-source-test.ps1' - works too, I suggest avoiding it for conceptual reasons:
While it is technically true that a script passed to -File is dot-sourced in the new session, that is (a) unexpected, given that scripts executed from inside a session are not (they run in a child scope) and (b) by far the most typical use case for -File is to execute a given script and then exit - in which case the aspect of dot-sourcing is irrelevant.
As such, it is better to think of this behavior as an implementation detail, and it is unfortunate that the CLI help mentions it so prominently - causing the confusion that prompted this question.
[1] The same applies analogously to the PowerShell [Core] 7+ CLI, pwsh, except that it defaults to -File rather than -Command.
It's about the way the path to the file is passed on the command line. Consider how it behaves when used from Command Prompt (test.ps1 contains the line $theValue = 'theValue').
Without the -File parameter, the path is treated differently: it is simply an argument passed to the (implied) -Command parameter of the PowerShell process being started, so the script is not dot-sourced.
You see the same thing when calling from within PowerShell.
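A minimal sketch of the difference, run from within an existing PowerShell session (-NoExit keeps the new session open so you can check $theValue at its prompt):
# Without -File: the path is bound to the implied -Command parameter,
# so the script runs in a child scope of the new session; at the new
# prompt, $theValue is NOT defined.
powershell -NoExit .\test.ps1
# With -File: the script is dot-sourced into the new session's scope;
# at the new prompt, $theValue IS defined.
powershell -NoExit -File .\test.ps1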
The specific part you reference is under the -File parameter, which needs to be used:
If the value of File is "-", the command text is read from standard input. Running powershell -File - without redirected standard input starts a regular session. This is the same as not specifying the File parameter at all.
If the value of File is a file path, the script runs in the local scope ("dot-sourced"), so that the functions and variables that the script creates are available in the current session.
(source: Microsoft Docs > About PowerShell.exe)

Executing a powershell script from another script with $PSScriptRoot?

I'm wondering what I'm missing here. I have a PowerShell script that calls another script with some parameters, as a way to keep things tidy. Here is what works:
C:\Scripts\Project\coolscript.ps1 -projname 'my.project' -domain 'work'
I want others to be able to use this script without having to change anything, so I thought I could make the path relative instead of the full one starting from C:, and execute the script like this:
$pathname = $PSScriptRoot + '\coolscript.ps1'
$pathname -projname 'my.project' -domain 'work'
However, I get an error that says 'unexpected token in expression or statement' for everything after $pathname.
Any ideas what I'm missing? Thank you
Use the call operator (&) as follows:
& $pathname -projname 'my.project' -domain 'work'
Call operator &
Runs a command, script, or script block. The call operator, also known as the "invocation operator," lets you run commands that are
stored in variables and represented by strings or script blocks. The
call operator executes in a child scope. For more about scopes, see
about_scopes.
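Putting it together, a minimal sketch of the calling script (it assumes coolscript.ps1 lives in the same folder as the calling script):
# Build the path relative to this script's folder, then invoke it with &.
$pathname = Join-Path $PSScriptRoot 'coolscript.ps1'   # or: $PSScriptRoot + '\coolscript.ps1'
& $pathname -projname 'my.project' -domain 'work'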

Powershell function call causes missing function error using powershell v7 on windows 10

I wrote a script to build all .net projects in a folder.
Issue
The issue is I am getting a missing function error when I call Build-Sollution.
What I tried
I made sure that the function was declared before I used it, so I am not really sure why it says that it is not defined.
I am new to powershell, but I would think a function calling another function should work like this?
Thanks in advance!
Please see below for the error message and code.
Error Message
Line |
3 | Build-Sollution $_
| ~~~~~~~~~~~~~~~
The term 'Build-Sollution' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
Build-Sollution:
Code
param (
    #[Parameter(Mandatory=$true)][string]$plugin_path,
    [string]$depth = 5
)
$plugin_path = 'path/to/sollutions/'
function Get-Sollutions {
    Get-ChildItem -File -Path $plugin_path -Include *.sln -Recurse
}
function Build-Sollution($solution) {
    dotnet build $solution.fullname
}
function Build-Sollutions($solutions) {
    $solutions | ForEach-Object -Parallel {
        Build-Sollution $_
    }
}
$solutions_temp = Get-Sollutions
Build-Sollutions $solutions_temp
From PowerShell ForEach-Object Parallel Feature | PowerShell
Script blocks run in a context called a PowerShell runspace. The runspace context contains all of the defined variables, functions and loaded modules.
...
And each runspace must load whatever module is needed and have any variable be explicitly passed in from the calling script.
So in this case, the easiest solution is to define Build-Sollution inside Build-Sollutions, as sketched below.
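A minimal sketch of that approach:
function Build-Sollutions($solutions) {
    $solutions | ForEach-Object -Parallel {
        # Each parallel runspace starts without the caller's functions,
        # so the helper is (re-)defined here, inside the script block.
        function Build-Sollution($solution) {
            dotnet build $solution.fullname
        }
        Build-Sollution $_
    }
}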
As for this...
I am new to powershell but I would think a function calling another
functions should work like this?
... you cannot use the functions until you load your code into memory. You need to run the code before the functions are available.
If you are in the ISE or VSCode and the script is not saved, select all of the code and run the selection. In the ISE, use F8 to run the selection and F5 to run all; in VSCode, use F8 to run the selection and Ctrl+F5 to run all. You can also just click the menu options.
If you are doing this from the console host, then run the script using dot-sourcing:
. .\UncToYourScript.ps1
It's OK to be new; we all started somewhere. But it's vital that you get ramped up first. So, beyond what I address here, be sure to spend time on YouTube and search for Beginning, Intermediate, and Advanced PowerShell videos. There are tons of free training resources all over the web, and using the built-in help files would have given you the answer as well.
about_Scripts
SCRIPT SCOPE AND DOT SOURCING Each script runs in its own scope. The
functions, variables, aliases, and drives that are created in the
script exist only in the script scope. You cannot access these items
or their values in the scope in which the script runs.
To run a script in a different scope, you can specify a scope, such as
Global or Local, or you can dot source the script.
The dot sourcing feature lets you run a script in the current scope
instead of in the script scope. When you run a script that is dot
sourced, the commands in the script run as though you had typed them
at the command prompt. The functions, variables, aliases, and drives
that the script creates are created in the scope in which you are
working. After the script runs, you can use the created items and
access their values in your session.
To dot source a script, type a dot (.) and a space before the script
path.
See also:
'powershell .net projects build run scripts'
'powershell build all .net projects in a folder'
Simple build script using Power Shell
Update
As per your comments below:
Sure the script should be saved, using whatever editor you choose.
The ISE does not use PSv7 by design; it uses WPSv5x and earlier.
The editor for PSv7 is VSCode. If you run a function that contains another function, you have explicitly loaded everything in that call, and as such it's available.
However, you are saying, you are using PSv7, so, you need to run your code in the PSv7 consolehost or VSCode, not the ISE.
Windows PowerShell (powershell.exe and powershell_ise.exe) and PowerShell Core (pwsh.exe) are two different environments, with two different executables, designed to run side by side on Windows. You do have to explicitly choose which one to use, or write your code to branch to a code segment relative to the host you started.
For example, let's say I wanted to run a console command and I am in the ISE, but I need to run that in Pwsh. I use a function like this that I have in a custom module autoloaded via my PowerShell profiles:
# Call code by console executable
Function Start-ConsoleCommand
{
    [CmdletBinding(SupportsShouldProcess)]
    [Alias('scc')]
    Param
    (
        [string]$ConsoleCommand,
        [switch]$PoSHCore
    )
    If ($PoSHCore)
    {Start-Process pwsh -ArgumentList "-NoExit","-Command &{ $ConsoleCommand }" -PassThru -Wait}
    Else {Start-Process powershell -ArgumentList "-NoExit","-Command &{ $ConsoleCommand }" -PassThru -Wait}
}
All this code is doing is taking whatever command I send it and if I use the PoSHCore switch...
scc -ConsoleCommand 'SomeCommand' -PoSHCore
... it will shell out to PowerShell Core and run the code; otherwise, it just runs from the ISE.
If you want to use the ISE with PSv7 and not do the shell-out thing, you need to force the ISE to use PSv7 to run code. See:
Using PowerShell Core 6 and 7 in the Windows PowerShell ISE

What is shortest possible way to download script from HTTP and run it with parameters using Powershell?

I have a PowerShell script file stored in an internal artifact server. The script URL is http://company-server/bootstrap.ps1.
What is a concise way to download that script and execute with a custom parameter?
I want to send such a command to users, who will copy-paste it over and over, so it must be a single-line and should be short.
What I currently have works, but it is long and unwieldy:
$c=((New-Object System.Net.WebClient).DownloadString('http://company-server/bootstrap.ps1'));Invoke-Command -ScriptBlock ([Scriptblock]::Create($c)) -ArgumentList 'RunJob'
I am wondering if there is shorter way to do this.
Note: From a code golf perspective, the solutions below could be shortened further, by eliminating insignificant whitespace; e.g., &{$args[0]}hi instead of & { $args[0] } hi. However, in the interest of readability such whitespace was kept.
A short formulation of a command that downloads a script via HTTP and executes it locally, optionally with arguments, is probably this, taking advantage of:
alias irm for Invoke-RestMethod, in lieu of (New-Object System.Net.WebClient).DownloadString()
omitting quoting where it isn't necessary
relying on positional parameter binding
& ([scriptblock]::Create((irm http://company-server/bootstrap.ps1))) RunJob
RunJob is the OP's custom argument to pass to the script.
An even shorter, but perhaps more obscure approach is to use iex, the built-in alias for Invoke-Expression, courtesy of this GitHub comment.
iex "& { $(irm http://company-server/bootstrap.ps1) } RunJob"
As an aside: in general use, Invoke-Expression should be avoided.
The command uses an expandable string ("...", string interpolation) to create a string with the remote script's content enclosed in a script block { ... }, which is then invoked in a child scope (&). Note how the arguments to pass to the script must be inside "...".
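To illustrate with hypothetical script content: if bootstrap.ps1 consisted of param($task) "running $task", the string that iex ends up evaluating would be:
# What the expandable string expands to (hypothetical script content):
& { param($task) "running $task" } RunJob    # output: running RunJob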
However, there is a general caveat (which doesn't seem to be a problem for you): if the script terminates with exit, the calling PowerShell instance is exited too.
There are two workarounds:
Run the script in a child process:
powershell { iex "& { $(irm http://company-server/bootstrap.ps1) } RunJob" }
Caveats:
The above only works from within PowerShell; from outside of PowerShell, you must use powershell -c "..." instead of powershell { ... }, but note that properly escaping embedded double quotes, if needed (for a URL with PS metacharacters and/or custom arguments with, say, spaces), can get tricky.
If the script is designed to modify the caller's environment, the modifications will be lost due to running in a child process.
Save the script to a temporary file first:
Note: The command is spread across multiple lines for readability, but it also works as a one-liner:
& {
$f = Join-Path ([IO.Path]::GetTempPath()) ([IO.Path]::GetRandomFileName() + '.ps1');
irm http://company-server/bootstrap.ps1 > $f;
& $f RunJob;
ri $f
}
The obvious down-side is that the command is much longer.
Note that the command is written with robustness and cross-platform compatibility in mind, so that it also works in PowerShell Core, on all supported platforms.
Depending on what platforms you need to support / what assumptions you're willing to make (e.g., that the current dir. is writeable), the command can be shortened.
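For instance, a possible shortening under such assumptions - a sketch only, relying on a writable current directory and a throwaway file name:
# Assumes the current directory is writable and .\t.ps1 can be overwritten.
irm http://company-server/bootstrap.ps1 > t.ps1; & .\t.ps1 RunJob; ri t.ps1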
Potential future enhancements
GitHub issue #5909, written as of PowerShell Core 6.2.0-preview.4 and revised as of PowerShell Core 7.0, proposes enhancing the Invoke-Command (icm) cmdlet to greatly simplify download-script-and-execute scenarios, so that you could invoke the script in question as follows:
# WISHFUL THINKING as of PowerShell Core 7.0
# iwr is the built-in alias for Invoke-WebRequest
# icm is the built-in alias for Invoke-Command.
iwr http://company-server/bootstrap.ps1 | icm -Args RunJob
GitHub issue #8835 goes even further, suggesting an RFC be created to introduce a new PowerShell provider that allows URLs to be used in places where only files were previously accepted, enabling calls such as:
# WISHFUL THINKING as of PowerShell Core 7.0
& http://company-server/bootstrap.ps1 RunJob
However, while these options are very convenient, there are security implications to consider.
Here is a shorter solution (158 chars.)
$C=(New-Object System.Net.WebClient).DownloadString("http://company-server/bootstrap.ps1");icm -ScriptBlock ([Scriptblock]::Create($c)) -ArgumentList "RunJob"
Here is 121
$C=(curl http://company-server/bootstrap.ps1).content;icm -ScriptBlock ([Scriptblock]::Create($c)) -ArgumentList "RunJob"
Here is 108
$C=(curl http://company-server/bootstrap.ps1).content;icm ([Scriptblock]::Create($c)) -ArgumentList "RunJob"
Here is 98
$C=(iwr http://company-server/bootstrap.ps1).content;icm -sc([Scriptblock]::Create($c)) -ar RunJob
Thanks to Ansgar Wiechers