Jenkins Windows Batch Command Powershell Environment Variables - powershell

I have a PowerShell script to be executed as a build step in Jenkins, and I need to pass an environment variable to it:
powershell -File .\Build.ps1 -Version $env:APP_VERSION_NUMBER
The APP_VERSION_NUMBER is an environment variable set by Version Number Plugin of Jenkins.
For some reason the -Version parameter is never set, and I see only the literal text $env:APP_VERSION_NUMBER in the console log output.
Is this a syntax issue?

When you use the PowerShell CLI's -File parameter, the arguments passed to the script are treated as literals; and since you're not invoking the command line from PowerShell, nothing expands $env:APP_VERSION_NUMBER, so it is passed verbatim.
To force the target PowerShell process to evaluate the arguments, you must use -Command rather than -File:
powershell -Command .\Build.ps1 -Version $env:APP_VERSION_NUMBER
However, now that we know that you're invoking the command line via cmd.exe (a batch file) from Jenkins (a build step of type Execute Windows batch command), the simpler answer is indeed to let cmd.exe expand the environment-variable reference, using its %<envVarName>% syntax:
powershell -File .\Build.ps1 -Version "%APP_VERSION_NUMBER%"
Note: Enclosing the environment-variable reference in "..." isn't strictly necessary with a version number, but is a good habit to form, so that values with embedded spaces or other shell metacharacters are passed correctly too.
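For illustration, a minimal sketch of the difference (purely hypothetical: it assumes a value with an embedded space, which a version number normally wouldn't have):
set "APP_VERSION_NUMBER=1.0 beta"
:: Unquoted: the space splits the value, so Build.ps1 sees -Version 1.0 plus a stray argument "beta".
powershell -File .\Build.ps1 -Version %APP_VERSION_NUMBER%
:: Quoted: Build.ps1 receives the whole value "1.0 beta" as the single -Version argument.
powershell -File .\Build.ps1 -Version "%APP_VERSION_NUMBER%"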

It turns out it is indeed a syntax issue. The fix looks like the following:
powershell -File ".\Build.ps1" -Version %APP_VERSION_NUMBER%

Related

Run powershell script as administrator via batch file with parameter passing

When I run the script via the batch file without administrator privileges, the parameter is passed, but when I run it as an administrator, the parameter is not passed.
I'm trying the command in the link below, but with no success:
run-script-within-batch-file-with-parameters
Command that executes the script, as an administrator, via batch file:
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File "D:\z_Batchs e Scripts\Batchs\Normaliza_LUFS\ArqsNorms_LUFS_pass.ps1' '%_vLUF%' -Verb RunAs}"
The %_vLUF% is the parameter to be passed.
Error message:
No line:1 character:4
+ & {Start-Process PowerShell -ArgumentList '-NoProfile -ExecutionPolic ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Start-Process], ParameterBindingException
+ FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.StartProcessCommand
Command in powershell script to receive the parameter:
Param(
[decimal]$env:_vLUF
)
What could be wrong, the command in the batch file or in the powershell script?
Test:
When the script is executed via the batch file without administrator privileges, and the parameter in the PowerShell script is defined as:
Param(
[decimal]$env:_vLUF
)
Command in the batch file running the script without being an administrator:
powershell.exe -executionpolicy remotesigned -File "D:\z_Batchs e Scripts\Batchs\Normaliza_LUFS\ArqsNorms_LUFS_pass.ps1" %_vLUF%
Note:
No need to use a named argument with the target parameter name.
Result: the parameter (including the minus sign) is received correctly.
Conclusion:
When the script is run via a batch file without administrator privileges, it works correctly even if the parameter in the script is declared as an environment variable (e.g. [decimal]$env:_vLUF), and regardless of whether the parameter value is negative (e.g. -11.0).
Why PowerShell correctly interprets the minus sign in the argument when the script runs without administrator privileges, but not when it runs as an administrator, is a question I leave to the experts!
However, my question was very well answered by @mklement0.
Your .ps1 script's parameter declaration is flawed:
Param(
[decimal]$env:_vLUF # !! WRONG - don't use $env:
)
See the bottom section for more information.
It should be:
Param(
[decimal] $_vLUF # OK - regular PowerShell variable
)
Parameters in PowerShell are declared as regular variables, not as environment variables ($env:).
(While environment variables can be passed as an argument (parameter value), an alternative is to simply reference them by name directly in the body of your script.)
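For illustration, a minimal sketch of what the top of ArqsNorms_LUFS_pass.ps1 could look like ($lufs below is just a hypothetical variable name; the rest of your script is assumed, not shown):
Param(
    # Regular PowerShell parameter; receives the value passed via -_vLUF.
    [decimal] $_vLUF
)
# Alternative, without a parameter: read the environment variable directly in the body.
# Note that it is a [string] and only exists if the calling process defined it.
# $lufs = [decimal] $env:_vLUF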
Your PowerShell CLI call has problems too, namely with quoting.
Try the following instead:
powershell -NoProfile -ExecutionPolicy Bypass -Command "Start-Process -Verb RunAs powershell -ArgumentList '-NoProfile -ExecutionPolicy Bypass -File \"D:\z_Batchs e Scripts\Batchs\Normaliza_LUFS\ArqsNorms_LUFS_pass.ps1\" -_vLUF %_vLUF%'"
Specifically:
Embedded " chars. must be escaped as \" (sic) when using the Windows PowerShell CLI (powershell.exe); however, given that %_vLUF% represents a [decimal], you needn't quote it at all.
However, you appear to have hit a bug that affects PowerShell versions up to at least 7.2.4 (current as of this writing): if the argument starts with -, such as in negative number -11.0, the -File CLI parameter invariably interprets it as a parameter name - even quoting doesn't help. See GitHub issue #17519.
The workaround, as used above is to use a named argument, i.e. to precede the value with the target parameter name: -_vLUF %_vLUF%
As an aside: There's no reason to use & { ... } in order to invoke code passed to PowerShell's CLI via the -Command (-c) parameter - just use ... directly, as shown above. Older versions of the CLI documentation erroneously suggested that & { ... } is required, but this has since been corrected.
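A quick illustration of that aside; the two calls below behave identically:
:: Both run Get-Date and then exit; the & { ... } wrapper adds nothing.
powershell -NoProfile -Command "Get-Date"
powershell -NoProfile -Command "& { Get-Date }"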
As for the broken attempt to use [decimal]$env:_vLUF as a parameter declaration:
Param(
[decimal]$env:_vLUF # !! EFFECTIVELY IGNORED
)
is effectively ignored.
However, if an environment variable _vLUF happens to be defined, it is accessible in the body of a script, independently of which parameters, if any, have been passed.
In direct invocation of your .ps1 script from your batch file, _vLUF indeed exists as an environment variable, because in cmd.exe (the interpreter of batch files), variables are invariably also environment variables - unlike in PowerShell.
That is, if %_vLUF% has a value in your batch file, a powershell child process you launch from it automatically sees it as $env:_vLUF
By contrast, if you launch an elevated process via Start-Process from such a PowerShell child process, that new, elevated process does not see the caller's environment variables - by security-minded design.
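A hypothetical check you could add to the script body to see which source is available (it assumes the corrected [decimal]$_vLUF declaration shown above):
if ($PSBoundParameters.ContainsKey('_vLUF')) {
    # The value arrived as an argument - this also works in the elevated case.
    "Received -_vLUF argument: ${_vLUF}"
}
elseif ($env:_vLUF) {
    # The environment variable is only visible when the caller's environment is
    # inherited, i.e. in the non-elevated, direct invocation from the batch file.
    "Falling back to environment variable: ${env:_vLUF}"
}
else {
    "No value available."
}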
Note:
That PowerShell even syntactically accepts [decimal]$env:_vLUF as a parameter declaration should be considered a bug.
What happens is that a regular variable named env:_vLUF is indeed created and bound, if an argument is passed to it, but on trying to get the value of that variable in the body of your script, it is preempted by the environment variable.
As such, an invocation can break, namely if the parameter is type-constrained and you pass a value that cannot be converted to that type ([decimal] in the case at hand).
If the invocation doesn't break, the type constraint is ignored: $env:_vLUF is invariably of type [string], as all environment variables are.

Why is there no way to dot-source a script via PowerShell.exe?

The help document says that the script will be executed in "dot-sourced" mode, but it doesn't. Why?
I read the full text of the help document and couldn't find the reason, so I came for help.
PS> PowerShell.exe -File '.\dot-source-test.ps1'
PS> $theValue
PS> . '.\dot-source-test.ps1'
PS> $theValue
theValue
PS>
The content of 'dot-source-test.ps1' is $theValue = 'theValue'.
If the value of File is a file path, the script runs in the local scope ("dot-sourced"), so that the functions and variables that the script creates are available in the current session.
about_PowerShell_exe - PowerShell | Microsoft Docs
To prevent conceptual confusion:
In order to dot-source a script, i.e. execute it directly in the caller's scope (as opposed to a child scope, which is the default), so that the script's variables, function definitions, ... are seen by the caller:
In the current PowerShell session, just use ., the dot-sourcing operator, directly:
# Dot-source in the caller's scope.
# When executed at the prompt in an interactive PowerShell session,
# the script's definitions become globally available.
. '.\dot-source-test.ps1'
Via powershell.exe, the Windows PowerShell CLI[1]:
Note: Whatever dot-sourcing you perform this way is limited to the child process in which powershell.exe runs and its PowerShell session; it has no impact on the caller's session.
Dot-sourcing via the CLI makes sense only in two scenarios:
Scenario A: You're passing commands via the (possibly positionally implied) -Command (-c) parameter that relies on definitions that must first be dot-sourced from a script file, and you want the session to exit automatically when the commands have finished executing.
Scenario B: You're entering a (possibly nested) interactive PowerShell session into which you want to dot-source (pre-load) definitions from a script file; as any interactive session, you will need to exit it manually, typically with exit.
Scenario A: Pre-load definitions, execute commands that rely on them, then exit:
The following starts a (new) PowerShell session as follows:
Script file .\dot-source-test.ps1 is dot-sourced, which defines variable $theValue in the caller's (here: the global) scope.
The value of $theValue is output.
The new session is automatically exited on completing the commands.
PS> powershell -c '. .\dot-source-test.ps1; $theValue'
theValue
Scenario B: Enter a (new) interactive session with pre-loaded definitions:
Simply add the -noexit switch in order to enter an interactive session in which script file .\dot-source-test.ps1 has been dot-sourced:
powershell -noexit -c '. .\dot-source-test.ps1'
# You're now in a (new) interactive session in which $theValue is defined,
# and which you must eventually exit manually.
Note:
If neither -File nor a command (via explicit or implied -Command / -c) is specified, -noexit is implied.
Because -c is needed here for dot-sourcing, -noexit must be specified to keep the session open.
While using -File for dot-sourcing instead - powershell -noexit -File '.\dot-source-test.ps1' - works too, I suggest avoiding it for conceptual reasons:
While it is technically true that a script passed to -File is dot-sourced in the new session, that is (a) unexpected, given that scripts executed from inside a session are not (they run in a child scope) and (b) by far the most typical use case for -File is to execute a given script and then exit - in which case the aspect of dot-sourcing is irrelevant.
As such, it is better to think of this behavior as an implementation detail, and it is unfortunate that the CLI help mentions it so prominently - causing the confusion that prompted this question.
[1] The same applies analogously to the PowerShell [Core] 7+ CLI, pwsh, except that it defaults to -File rather than -Command.
It's about the way the path to the file is passed on the command line. For example, when calling from Command Prompt (with test.ps1 containing the line $theValue = 'theValue'): without the -File switch, the path is treated differently, as an argument handed to the PowerShell process being started. The same thing happens when calling from within PowerShell.
The specific part you reference is under the -File switch, which therefore needs to be used.
If the value of File is "-", the command text is read from standard input. Running powershell -File - without redirected standard input starts a regular session. This is the same as not specifying the File parameter at all.
If the value of File is a file path, the script runs in the local scope ("dot-sourced"), so that the functions and variables that the script creates are available in the current session.
(source: Microsoft Docs > About PowerShell.exe)

Setting up a batch script to run powershell with arguments

I have a PowerShell script that takes an argument/command and runs fine in PowerShell. I am looking at setting up a batch script to run the following command as-is, to mimic PowerShell
(-config is a function that's defined within script)
C\temp\power_t1.ps1 -config d:\temp\dirlist.txt
I wrote a batch script something like
ECHO OFF
POWERSHELL.EXE -file "C\temp\power_t1.ps1 -config d:\temp\dirlist.txt"
but this doesn't seem to be working.
Any suggestions?
Regards,
Ruben
The script file and the arguments for the script file are separate arguments for powershell.exe. If you look at powershell.exe /?, under -File, you see:
File must be the last parameter in the command, because all characters typed after the File parameter name are interpreted as the script file path followed by the script parameters.
Try:
POWERSHELL.EXE -File "C\temp\power_t1.ps1" -config d:\temp\dirlist.txt
Basically, treat everything after the -File parameter more or less as though you were writing it in PowerShell with the call operator (&).
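In other words, the call above is roughly equivalent to running this from within PowerShell (paths copied from the question as-is):
# What the -File invocation amounts to when written inside PowerShell:
& "C\temp\power_t1.ps1" -config d:\temp\dirlist.txt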

What is the dash ("-") when used with pipe ("|") in CMD?

I wanted to create some clickable PowerShell scripts, and I found this answer that I modified slightly to be:
;#Findstr -bv ;#F %0 | powershell -noprofile -command - & goto:eof
# PowerShell Code goes here.
I understand Findstr is passing all lines that don't begin with ;#F to the right-hand side of the pipe and the dash specifies where the input should go, but what is the dash character called and where is it documented?
I found an explanation of CMD's pipe operator on Microsoft's Using command redirection operators, but it doesn't mention anything about the dash character.
I presume you mean the - that precedes the &. It has nothing to do with the pipe operator; it is a directive for PowerShell.
Here is a description of the -Command option excerpted from powershell help (accessed by powershell /?)
-Command
Executes the specified commands (and any parameters) as though they were
typed at the Windows PowerShell command prompt, and then exits, unless
NoExit is specified. The value of Command can be "-", a string. or a
script block.
If the value of Command is "-", the command text is read from standard
input.
BTW - I did not realize FINDSTR accepted - as an option indicator until I saw your question. I've only seen and used /. Good info to know.
The - tells PowerShell to accept the command(s) from stdin rather than from arguments. It is not a cmd / batch or piping feature. It would work with < (input redirection) as well.
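For example (a sketch from a cmd prompt; some-script.ps1 is a placeholder file name):
:: Pipe PowerShell code into stdin; -Command - reads and runs it.
echo Get-Date | powershell -NoProfile -Command -
:: Per the help text, input redirection feeds stdin the same way.
powershell -NoProfile -Command - < some-script.ps1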
PowerShell version 2 adds a "Run with PowerShell" right-click context menu item for running scripts, and there are enhanced shell extensions available for running PowerShell scripts with elevated privileges. However, if you just want to run a PowerShell script by double-clicking a file, I recommend calling the PowerShell script from a batch script instead of trying to embed PowerShell code in the batch script. In the batch script use this: powershell.exe -file "%~dp0MyScript.ps1", where %~dp0 expands to the drive and directory of the batch file itself. This essentially creates a bootstrapper for your PowerShell script that you can double-click to launch it.
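A minimal bootstrapper sketch along those lines (MyScript.ps1 is the name used above; adjust it to your script):
@echo off
:: %~dp0 expands to the drive and folder of this batch file, so the script is
:: found next to the batch file regardless of the caller's current directory.
powershell.exe -file "%~dp0MyScript.ps1"
:: Optional: keep the console window open so the output can be read.
pause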

How to get Hudson CI to execute a Powershell script?

I'm using Hudson version 1.324 for CI and have a couple of issues:
Environment:
Windows Server 2008
Powershell v1.0
Hudson 1.324 running as a service
Hudson Powershell Plugin installed
Psake (aka. "Powershell Make/Rake" available from Github) 0.23
(All current/latest versions as of this initial post)
I have a Powershell (PS) script that works to compile, run NUnit tests, and if successful, create a 7z file of the output. The PS script works from the command line, on both my local development box as well as the CI server where Hudson is installed.
1) Execution Policy with Powershell.
I initially ran a PS console on the server, ran Set-ExecutionPolicy Unrestricted, which allows any script to be run. (Yes, I realize the security concerns here, I'm trying to get something to work and Unrestricted should remove the security issues so I can focus on other problems.)
[This worked, and allowed me to fire off the PS build script from Hudson yesterday. I then encountered another problem, but we'll discuss that more in item #2.]
Once Hudson could fire off a PS script, it complained with the following error:
"C:\Windows\system32\WindowsPowerShell\v1.0\powershell "&
'OzSystems.Tools\psake\psake.ps1' '.\oz-build.ps1'" The term
'OzSystems.Tools\psake\psake.ps1' is not recognized as a cmdlet, funct
ion, operable program, or script file. Verify the term and try again.
At line:1 char:2
+ & <<<< 'OzSystems.Tools\psake\psake.ps1' '.\oz-build.ps1'"
Using the same command line, I am able to successfully execute the PS script from the command line manually. However Hudson is unable to get PS to do the same. After looking at additional PS documentation I also tried this:
"& 'OzSystems.Tools\psake\psake.ps1' '.\oz-build.ps1'"
and got a similar error. There does not appear to be any documentation for the Powershell plugin for Hudson. I've gone through all the Powershell plugin files and don't see anything that's configurable. I can't find a log file for Hudson to get additional information.
Can anyone help me past this?
2) I spent yesterday wrestling with #1. I came in this AM and tried to dig in again, after restarting the Hudson server/service, and now it appears that the ExecutionPolicy has been reset to Restricted. I did what worked yesterday: opened a PS console and set the ExecutionPolicy to Unrestricted. It shows Unrestricted in the PS console, but Hudson says that it doesn't have rights to execute PS scripts. I reopened a new PS console and confirmed that the ExecutionPolicy is still Unrestricted -- it is. But Hudson evidently is not aware of this change. Restarting the Hudson service again does not change Hudson's view of the policy.
Does anyone know what's going on here?
Thanks, Derek
I just ran into the problem of running PowerShell scripts in Hudson. The thing is that you are running a 32-bit Java process, and you've configured the execution policy for 64-bit PowerShell but not for 32-bit. See the following thread we created at Microsoft:
http://social.technet.microsoft.com/Forums/en/winserverpowershell/thread/a9c08f7e-c557-46eb-b8a6-a19ba457e26d
If you're lazy:
1. Start PowerShell (x86) from the Start menu as administrator
2. Set the execution policy to RemoteSigned
Run this once and you're home free.
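In command form (a one-liner to run once in that elevated 32-bit console):
Set-ExecutionPolicy RemoteSigned
Get-ExecutionPolicy   # should now report RemoteSigned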
When running PowerShell from a scheduled task or from Hudson, you want to:
Specify the -ExecutionPolicy parameter (in your case: -Ex Unrestricted)
Specify the command using either -Command { ... } or -File, NOT BOTH, and not without specifying which you mean.
Try this (except that I don't recommend using relative paths):
PowerShell.exe -Ex Unrestricted -Command "C:\Path\To\OzSystems.Tools\psake\psake.ps1" ".\oz-build.ps1"
To be clear, this will work too:
PowerShell.exe -Ex Unrestricted -Command "&{&'OzSystems.Tools\psake\psake.ps1' '.\oz-build.ps1'}"
The first string after -Command is interpreted as THE NAME OF A COMMAND, and every parameter after that is just passed to that command as a parameter. The string is NOT a script, it's the name of a command (in this case, a script file)... you cannot put "&'OzSystems.Tools\psake\psake.ps1'" but you can put "OzSystems.Tools\psake\psake.ps1" even if it has spaces.
To quote from the help (run PowerShell -?) emphasis mine:
-Command
Executes the specified commands (and any parameters) as though they were
typed at the Windows PowerShell command prompt, and then exits, unless
NoExit is specified. The value of Command can be "-", a string. or a
script block.
If the value of Command is "-", the command text is read from standard
input.
If the value of Command is a script block, the script block must be enclosed
in braces ({}). You can specify a script block only when running PowerShell.exe
in Windows PowerShell. The results of the script block are returned
to the parent shell as deserialized XML objects, not live objects.
If the value of Command is a string, Command must be the last parameter
in the command , because any characters typed after the command are
interpreted as the command arguments.
I have been having the same problems as you (as you've seen from my comments). I have given up on the PowerShell launcher and moved to running things using the batch-file launcher. Even though I had set the system to Unrestricted, that setting didn't seem to matter to Hudson's launcher. I don't know if it runs in some other context or something; even adding things to the global profile.ps1 didn't seem to help. What I ended up doing was running
powershell " set-executionpolicy Unrestricted; & 'somefile.ps1'"
which does what I need, although it isn't ideal. I've e-mailed the plugin author about this and will update.
For question #1, try this (assuming you are using PowerShell 2.0):
"C:\Windows\system32\WindowsPowerShell\v1.0\powershell -executionPolicy Unrestricted -file OzSystems.Tools\psake\psake.ps1 C:\{path}\oz-build.ps1"
You are using "." for the path to oz-build.ps1. I suspect you will need to provide the full path to your oz-build.ps1 file to make this work. Unless the infrastructure that executes the command above happens to have the current dir set correctly. And even if it is set correctly for the "process", that only matters to .NET/Win32 API calls and not to PowerShell cmdlets. Current dir in PowerShell is tracked differently than the process's current dir because PowerShell can have multiple runspaces running simultaneously. That sort of global, mutable value doesn't work in this concurrent scenario.
As for question #2, what account does the Hudson service run under? Make sure that account has executed Set-ExecutionPolicy RemoteSigned (or unrestricted).
I just got through this exact problem. What a pain!
If you are running a 32-bit JVM on 64-bit Windows, make sure that you set the execution policy for the 32-bit PowerShell interface. I found my 32-bit executable here:
C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe
The 32- and 64-bit PowerShell environments are completely distinct, so setting the execution policy in one has no effect on the other.
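A quick way to see this (a sketch; run it from a 64-bit cmd.exe so the System32 path isn't redirected):
:: 64-bit PowerShell:
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command Get-ExecutionPolicy
:: 32-bit PowerShell - may report a different policy:
C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -Command Get-ExecutionPolicy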