Run command in powershell and ignore exit code in one line - powershell

I try to execute a command in powershell and ignore any non-zero exit code. Unfortunately I completely fail doing this :-(
Under Linux this is done with this trivial line:
command arg1 arg2 || echo "ignore failure"
The || clause is executed only in case of a failure, and then the exit code of echo resets $?
I thought something like this would do the trick:
Invoke-Expression "command arg1 arg2" -ErrorAction Ignore
But $LASTEXITCODE is still set to a nonzero value

PowerShell v7+'s pipeline-chain operators, && and ||, implicitly act on $LASTEXITCODE, but never reset it.
If you do want to reset it - which is generally not necessary - you can do the following:
command arg1 arg2 || & { "ignore failure"; $global:LASTEXITCODE = 0 }
Note that PowerShell scripts - unlike scripts for POSIX-compatible shells such as bash - do not implicitly exit with the exit code of the most recently executed command; instead, you must use exit $n explicitly, where $n is the desired exit code.
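For example, a minimal sketch of a wrapper script (the name run-tool.ps1 and the placeholder command are made up) that explicitly relays the last exit code to its caller:
# run-tool.ps1 - hypothetical wrapper script
command arg1 arg2     # placeholder for the external-program call
# Without the following line, the script would report exit code 0 when
# invoked via -File, even if the command above failed.
exit $LASTEXITCODE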
In the context of calling the PowerShell CLI from the outside, the above applies to using the -File parameter to call a script; for use with the -Command (-c) parameter, see the next section.
As for what you tried:
|| and && don't work in Windows PowerShell (versions up to v5.1) at all.
Invoke-Expression doesn't help here, and it should generally be avoided and used only as a last resort, due to its inherent security risks: superior alternatives are usually available, and if there truly is no alternative, only ever use it on input you either provided yourself or fully trust - see this answer.
If you're using the Windows PowerShell CLI with -Command (-c), and you need to make sure that the PowerShell process exits with exit code 0, do something like the following (... represents your command):
powershell.exe -noprofile -c "...; exit 0"
If you want to comment on the failure:
powershell.exe -noprofile -c "...; if ($LASTEXITCODE) { 'ignore failure' }; exit 0"
Note: In this case, ; exit 0 isn't strictly necessary, because the if statement alone, which succeeds irrespective of the value of $LASTEXITCODE, is enough to make the exit code 0.
Also, note that the PowerShell CLI sends all of PowerShell's output streams - including the error stream - to stdout by default, though you can selectively redirect the error stream on demand with 2>.
This also applies to the PowerShell [Core] v7+ CLI, whose executable name is pwsh, and whose parameters are a superset of the Windows PowerShell CLI's.
For more information on PowerShell with respect to process exit codes, see this answer.

Related

How to execute Powershell's "start-process -Verb RunAs" from inside a Batch where the elevated command inherits the Batch's environment?

1. Problem
I have a complicated batch file where some parts need to run with elevated/admin rights (e.g. interacting with Windows services) and I found a Powershell way to do that:
powershell.exe -command "try {$proc = start-process -wait -Verb runas -filepath '%~nx0' -ArgumentList '<arguments>'; exit $proc.ExitCode} catch {write-host $Error; exit -10}"
But there's a huge caveat! The elevated instance of my script (%~nx0) starts with a fresh copy of environment variables and everything I set "var=content" before is unavailable.
2. What I've tried so far
This Powershell script doesn't help either because Verb = "RunAs" requires UseShellExecute = $true which in turn is mutually exclusive to/with StartInfo.EnvironmentVariables.Add()
$p = New-Object System.Diagnostics.Process
$p.StartInfo.FileName = "cmd.exe";
$p.StartInfo.Arguments = '/k set blasfg'
$p.StartInfo.UseShellExecute = $true;
$p.StartInfo.Verb = "RunAs";
$p.StartInfo.EnvironmentVariables.Add("blasfg", "C:\\Temp")
$p.Start() | Out-Null
$p.WaitForExit()
exit $p.ExitCode
And even if that would work I'd still need to transfer dozens of variables...
3. unappealing semi-solutions
because circumventing the problem is no proper solution.
helper tools like hstart - because I can't rely on external tools. Only CMD, Powershell and maybe VBscript (but it looks like runas plus wait and errorlevel/ExitCode processing isn't possible with/in vbs).
passing (only required) variables as arguments - because I need dozens and escaping them is an ugly chore (both the result and doing it).
restarting the whole script - because it's inefficient with all the parsing, checking, processing and other tasks happening again (and again and ...). I'd like to keep the elevated parts to a minimum and some actions can later be run as a normal user (e.g. service start/stop).
Writing the environment to a file and rereading it in the elevated instance - because it's an ugly hack and I'd hope there's a cleaner option out there. And writing possibly sensitive information to a file is even worse than storing it temporarily in an environment variable.
Here's a proof of concept that uses the following approach:
Make the powershell call invoke another, aux. powershell instance as the elevated target process.
This allows the outer powershell instance to "bake" Set-Item statements that re-create the caller's environment variables (which the outer instance inherited, and which can therefore be enumerated with Get-ChildItem Env:) into the -command string passed to the aux. instance, followed by a re-invocation of the original batch file.
Caveat: This solution blindly recreates all environment variables defined in the caller's process in the elevated process - consider pre-filtering, possibly by name patterns, such as by a shared prefix; e.g., to limit variable re-creation to those whose names start with foo, replace Get-ChildItem Env: with Get-ChildItem Env:foo* in the command below.
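Stripped of the batch-file escaping, the "baking" step is essentially the following (a PowerShell-only sketch; quoting of values is simplified and does not cover every edge case):
# Build a string of Set-Item statements that re-create the caller's
# environment variables; embedded single quotes in values are doubled
# so the generated code parses correctly.
$recreateEnv = (Get-ChildItem Env: | ForEach-Object {
    'Set-Item "env:{0}" ''{1}''; ' -f $_.Name, ($_.Value -replace "'", "''")
}) -join ''
# $recreateEnv is then prepended to the -command string passed to the
# elevated (aux.) powershell instance, which executes it before
# re-invoking the original batch file.
The full batch-file proof of concept: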
@echo off & setlocal
:: Test if elevated.
net session 1>NUL 2>NUL && goto :ELEVATED
:: Set sample env. vars. to pass to the elevated re-invocation.
set foo1=bar
set "foo2=none done"
set foo3=3" of snow
:: " dummy comment to fix syntax highlighting
:: Helper variable to facilitate re-invocation.
set "thisBatchFilePath=%~f0"
:: Re-invoke with elevation, synchronously, reporting the exit
:: code of the elevated run.
:: Two sample arguments, ... and "quoted argument" are passed on re-invocation.
powershell -noprofile -command ^
trap { [Console]::Error.WriteLine($_); exit -667 } ^
exit ( ^
Start-Process -Wait -PassThru -Verb RunAs powershell ^
"\" -noprofile -command `\" $(Get-ChildItem Env: | ForEach-Object { 'Set-Item \\\"env:' + $_.Name + '\\\" \\\"' + $($_.Value -replace '\""', '`\\\""') + '\\\"; ' }) cmd /c '\`\"%thisBatchFilePath:'=''%\`\" ... \`\"quoted argument\`\" & exit'; exit `$LASTEXITCODE`\" \"" ^
).ExitCode
echo -- Elevated re-invocation exited with %ERRORLEVEL%.
:: End of non-elevated part.
exit /b
:ELEVATED
echo Now running elevated...
echo -- Arguments received:
echo [%*]
echo -- Env. vars. whose names start with "foo":
set foo
:: Determine the exit code to report.
set ec=5
echo -- Exiting with exit code %ec%...
:: Pause, so you can inspect the output before exiting.
pause
exit /b %ec%
Note:
trap { [Console]::Error.WriteLine($_); exit -667 } handles the case where the user declines the elevation prompt, which causes a statement-terminating error that the trap statement catches (using a try / catch statement around the Start-Process call is also an option, and usually the better choice, but in this case trap is syntactically easier; a minimal sketch of that alternative follows these notes).
Specifying pass-through arguments (arguments to pass directly to the re-invocation of the (elevated) batch file, after the cmd /c '\`\"%thisBatchFilePath:'=''%\`\" part above):
If arguments contain ', you must double them ('')
If arguments need double-quoting, you must enclose them in '\`\"...\`\" (sic), as shown with \`\"quoted argument\`\" above.
The cmd /c '<batch-file> & exit' re-invocation technique is required to ensure robust exit-code reporting, unfortunately - see this answer for details.
The explicit exit $LASTEXITCODE statement after the batch-file re-invocation is required to make the PowerShell CLI report the specific exit code reported by the batch file - without that, any nonzero exit code would be mapped to 1. See this answer for a comprehensive discussion of exit codes in PowerShell.
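For reference, here is a rough sketch of the try / catch alternative to trap in plain PowerShell, outside of the batch-file escaping context (the script path is a placeholder):
try {
    # Re-invoke elevated, wait for it, and capture the process object.
    $proc = Start-Process -Wait -PassThru -Verb RunAs powershell '-noprofile -file "C:\path\to\script.ps1"'
    exit $proc.ExitCode
}
catch {
    # Reached, e.g., when the user declines the UAC elevation prompt.
    [Console]::Error.WriteLine($_)
    exit -667
}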
A piping hot solution
Derived from mklement0's working proof of concept and Jeroen Mostert's Base64 suggestion, I've built a solution with this approach:
Pipe in data from inside the batch to the outer Powershell.
Let it convert the piped data into a Base64 string.
which is passed on into the command line of the elevated Powershell.
which in turn converts it back and pipes it into the new batch's instance.
It is more flexible because you're not limited to environment variables (you can essentially pass on anything text-based) and the Powershell command doesn't need to be edited to choose what gets piped through; a stripped-down sketch of the Base64 round-trip follows the list of limitations below. But it has a few limitations mklement0's implementation doesn't suffer from:
Environment variables containing newlines will not be passed on correctly and can cause chaos (depending on what comes after the LF, see barz).
Currently every line piped through (except for the first one) gets one space prepended to it (so far I couldn't figure out how to fix that). It's usually not a problem and can be worked around (fooDoubleQouting is a negative example).
The elevated instance doesn't react to console input as usual any more (see notes).
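Stripped of all the escaping, the Base64 round-trip at the heart of this approach is simply (a standalone PowerShell sketch; the sample text is made up):
# Encode arbitrary text - e.g. "name=value" lines - as a Base64 string,
# which survives being embedded in a command line without further escaping ...
$text = "foo1=bar`nfoo2=none done"
$b64 = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($text))
# ... and decode it on the receiving side.
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($b64))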
Example / test batch:
@echo off & setlocal EnableDelayedExpansion
::# Test if elevated.
net session 1>NUL 2>NUL && goto :ELEVATED
(set LF=^
%=this line is empty=%
)
::# Set sample env. vars. to pass to the elevated instance.
set foo1=bar
set "foo2=none done"
set foo3=3" of snow
set "barz= Line1!LF! foo1=Surprise^! foo1 isn't '%foo1%' anymore. It was unintentionally overwritten."
set barts=1 " 3_consecutive_" "_before_EOL
set "barl=' sdfs' ยดยด`` =43::523; \/{[]} 457wb457;; %%%^!2^!11^!^!"
::# ' dummy comment#1 to fix syntax highlighting.
::# Helper variable to facilitate re-invocation (in case %~f0 contains any single quotes).
set "_selfBat=%~f0"
::# DDE - so "!" don't get expanded anymore. Was only needed for "set barz=..."
setlocal DisableDelayedExpansion
::# print variables etc. to console before self invocation & elevation.
call :testPrint
::# Generate pipe input. Be aware of CMD's handicaps of what's allowed in a command block.
::# e.g. "REM" is not allowed and neither is echoing an unescaped closing parenthesis: ")" -> "^^^)"
(
echo[foo_Setting_one=extra-varval.
set ^^"
echo[bar_stuff=in between.^^^)^^^"
set bar
echo["fooDoubleQouting=testertest"
) | powershell.exe -nologo -noprofile -command ^
trap { [Console]::Error.WriteLine($_); exit -667 } ^
exit ( ^
Start-Process -PassThru -Wait -WindowStyle Maximized -Verb RunAs 'powershell.exe' ^
"\"-nol -nop -comm `\" $('Write-Output $([Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String(\\\"' + $([Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($(foreach ($i in $input) {\"$i`n\"})))) + '\\\")))') | cmd.exe '/D','/U','/T:4F','/S','/C',' \`\"%_selfBat:'=''%\`\" \`\"quoted argument\`\" nonQtdArg & exit'; exit `$LastExitCode `\" \"" ^
).exitCode
echo[
echo[ ---- Returned errorlevel is: %ERRORLEVEL%
pause
endlocal & endlocal & exit /b %ERRORLEVEL%
:testPrint
echo[
echo[ ---- WhiteSpaceTest: "%barts%"
set foo
set bar
echo[
set ^"
exit /B
::# " dummy comment#2 to fix syntax highlighting again.
:ELEVATED
setlocal DisableDelayedExpansion
::# Read and parse piped in data.
::# (with "delims" & "eol" truly defined as empty so every line is read as-is, even empty lines)
for /F delims^=^ eol^= %%A in ('findstr.exe "^"') do (
echo[ Parsing %%~A
for /F "tokens=1,* delims=="eol^= %%B in ("%%~A") do (
echo[ into "%%~B"
echo[ equals "%%~C"
::# Convert the piped in data back into environment variables+values.
set "%%~B=%%~C" 2>NUL
)
echo[
)
echo[-------- END PIPEREADING --------
echo[-- Arguments received:
echo[ [%*]
call :testPrint
set "ERR=42"
echo[
::# to actually pause and/or wait for / react to user input(!) one needs to pipe in CON (console).
<con set /P ERR=Enter arbitrary exitcode / errorlevel:
endlocal & exit /B %ERR%
Notes:
see mklement0's notes.
The CMD /C '<batch-file_withEscaped'> & exit' re-invocation technique isn't required if you consistently exit /b X in your batch file. Then &\`\"%_selfBat%\`\" instead of CMD /C ... & exit is enough (with comma-separated arguments: 'arg1','arg2').
'/D','/T:4F', - Ignore CMD's registry AutoRun commands and set fore-/background colors to white on dark red.
echo[ instead of echo is safer and quicker (cmd doesn't need to search for actual executables named echo.*).
<con is required in the elevated instance for anything needing user interaction (e.g. pause or set /P ...). Without <con the now empty(?) piped-in standard input (pipe#0) still delivers nul(?) to anything asking for it (my assumption). I'm sure there is a way to rescue stdin from the pipe and reattach it to con (maybe some kind of breakthrough from in here).
barl's backticks get mangled.
Escaping hell
Here are the dynamic middle and inner command lines to show what's going on and shave away some escaping magic:
powershell.exe -nol -nop -comm "Write-Output $([Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String(\"<<BASE64_BLOB>>\"))) | cmd.exe '/D','/U','/T:4F','/S','/C',' \"<<path\this.cmd_withEscaped'>>\" \"quoted argument\" nonQtdArg & exit'; exit $LastExitCode"
Even with just my default environment that command line is somewhere around 5 kB in size(!), thanks to the <<BASE64_BLOB>>.
cmd.exe /D /U /T:4F /S /C " "<<path\this.cmd_withNormal'>>" "quoted argument" nonQtdArg & exit"
So yeah, like has been said, the environment is not meant to be passed from one user to another, by design, because of the security implications. That doesn't mean it can't be done, even if it's not something you're "supposed" to do. While I do think you should look into what you actually want to achieve, I absolutely hate the type of answers where people tell you what you achsually "should" do and don't answer the actual question at all. So I'm giving you both options here.
"Passing" the environment
You have several options here
From the elevated child process, read the environment variables from the unelevated caller parent process' memory using the NtQueryInformationProcess and ReadProcessMemory APIs.
Then either overwrite the variables on the target process (in your case, the current process) with WriteProcessMemory or just set them as you normally would. You can achieve this with only Powershell, albeit you need to add some C# code to call the required API functions.
Here in my Enable-Privilege.ps1 example you can see how to implement NtQueryInformationProcess in PowerShell and manipulate a parent process (I wrote it for the purpose of modifying privilege tokens inside self/parent/any process). You would actually want to use a part of it because you need to enable SeDebugPrivilege to be able to manipulate memory of other processes.
This option would probably be the most "clean" and robust solution, without any instantly obvious caveats. See this codeproject article for more information: Read Environment Strings of Remote Process
Inside the unelevated parent process, iterate through all the environment variables and write them as a string to a single environment variable. Then pass that single environment value as an argument when spawning the elevated child process, where you can parse that string and write those environment values back (a minimal sketch of this idea follows this list). You would likely run into the same caveats as option 3 here, though.
Pipe the variables from the parent to the child, like has been proposed here already. The problem here is that the batch processor is really finicky and the rules of parsing and escaping are super janky, so it's very likely you would run into issues with special characters and other similar caveats with this option.
Using a kernel-mode driver, overwrite the security token of the unelevated process with an elevated one, writing back the original token after you are done. On the surface this would seem like the perfect solution, since you could actually stay inside the previously unelevated process and retain its environment without changing context; only the security context would be replaced. As in kernel mode you can modify everything, the security tokens are simple memory structs inside kernel memory which you can change. The problem with this approach is that it completely bypasses the Windows security model, as it's supposed to be impossible to change the security token of an existing process. Because it's supposed to be "impossible", it goes deep into undocumented territory, and inside the kernel you can easily break stuff if you don't know what you're doing, so this is definitely the "advanced" option (even though this particular thing is not too complicated, it's basically just writing some memory). As it's something you're not supposed to be doing, there is a possibility it breaks something, since Windows does not expect a process to suddenly have a different security context. That being said, I've used this approach with no problems in the past. It could be broken in the future, though, by any change in the security design. You would also need to either enable testsigning (aka disable Driver Signature Enforcement), digitally sign your driver, or use some other method to bypass this requirement (e.g. through a hypervisor or an exploit), but that is out of the scope of this answer.
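As a rough PowerShell illustration of option 2 (packing the environment into a single string; the variable names here are made up, and values containing newlines would break it):
# Caller (unelevated): pack all environment variables into one Base64 string ...
$packed = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes(
    (Get-ChildItem Env: | ForEach-Object { '{0}={1}' -f $_.Name, $_.Value }) -join "`n"))
# ... pass $packed as a single argument to the elevated process, which unpacks it:
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($packed)) -split "`n" |
    ForEach-Object {
        $name, $value = $_ -split '=', 2   # split on the first '=' only
        Set-Item -Path "env:$name" -Value $value
    }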
The achsually version
"because circumventing the problem is no proper solution."
In this case, I would do exactly that, since your problem is of such a nature that an easy solution for it doesn't exist, because it's not supported by design. It's hard to propose a specific solution given the lack of information about what it is you're actually trying to achieve here.
I'm gonna try to cover this in a general sense. The first step is to think about the "what it is you're actually trying to achieve here" part. What are the operations you need to do which require elevation? There would be multiple ways to achieve whatever it is in a supported fashion.
Examples:
For whatever you need to read/write/modify, you could change the security settings of the target (instead of the source). Meaning that, let's say you need to access a specific registry key, service, file, folder, whatever, you could simply modify the ACL of the target to allow the source (i.e. the user) to do whatever operation you need. If you need to modify a single service, for example, you could add the start/stop/modify right for only that single service.
If the thing you need is specific to the types of operations rather than specific targets, you could add the required privileges to the "Users" group. Or make a new group with the required privileges, and then add the user to that group.
If you want more granular control over what can/can't be done and/or the operations are specific, you could write a simple program and run it as an elevated service. Then you could just tell that service to do the required operations from the unelevated batch script, so no elevation request and no new process would be needed. You could simply do my-service.exe do-the-thing from batch, and that my-service would do the operation you need.
You could also always ask for the elevation at the beginning of the script, but as it's clear you don't want to do this with full administrator rights, you could create a new user just for this purpose and add it to a new group that has the required privileges you need. Note that without resorting to the aforementioned kernel-mode ""hacks"", you cannot add new privileges for a user on the fly, only enable/disable/remove existing ones. What you can do, though, is add them beforehand based on what you need, but that will need to happen before the process is started.

Call a batch file from PowerShell with proper exit code reporting while avoiding doubling of carets

I suspect there is no good solution, but perhaps I'm overlooking something:
What I'm after is a way to:
(a) call a batch file from PowerShell in a way that robustly reflects its - implicit or explicit - exit code in PowerShell's automatic $LASTEXITCODE variable.
Notably, calling a batch file that exits with, say, whoami -nosuch || exit /b, should result in $LASTEXITCODE reflecting whoami's exit code, i.e. 1. This is not the case when you invoke a batch file (by name or path) from PowerShell: the exit code is 0 (by contrast, inside a cmd.exe session %ERRORLEVEL% is set to 1).
Also note that the invocation should remain integrated with PowerShell's output streams, so I am not looking for solutions based on System.Diagnostics.Process.
Furthermore, I have no knowledge of or control over the batch files getting invoked - I'm looking for a generic solution.
(b) without double-quoted arguments passed to the batch file getting altered in any way, and without cmd.exe's behavior getting modified in any way; notably:
^ characters should not be doubled (see below).
Enabling delayed expansion with /V:ON is not an option.
The only way I know how to solve (a) is to invoke the batch file via cmd /c call.
Unfortunately, this violates requirement (b), because the use of call seemingly invariably doubles ^ characters in arguments. (And, conversely, not using call then doesn't report the exit code reliably).
Is there a way to satisfy both requirements?
Note that PowerShell is only the messenger here: The problem lies with cmd.exe, and anyone calling a batch file from outside a cmd.exe session is faced with the same problem.
Example (PowerShell code):
# Create a (temporary) batch file that echoes its arguments,
# provokes an error, and exits with `exit /b` *without an explicit argument*.
'#echo off & echo [%*] & whoami -nosuch 2>NUL || exit /b' | Set-Content test.cmd
# Invoke the batch file and report the exit code.
.\test.cmd "a ^ 2"; $LASTEXITCODE
The output should be:
["a ^ 2"]
1
However, in reality the exit code is not reported:
["a ^ 2"]
0 # !! BROKEN
If I call with cmd /c call .\test.cmd instead, the exit code is correct, but the ^ characters are doubled:
PS> cmd /c call .\test.cmd "a ^ 2"; $LASTEXITCODE
["a ^^ 2"] # !! BROKEN
1 # OK
I've no idea why this works, but it does:
cmd /c '.\test.cmd "a ^ 2" & exit'
$LASTEXITCODE
Output:
["a ^ 2"]
1
Kudos to beatcracker for finding an effective workaround in his answer; let me add some background information and guidance:
First, to be clear, no workaround should be necessary; cmd.exe's behavior is clearly a bug.
cmd /c '.\test.cmd "a ^ 2" || exit' - i.e. || rather than & - is what one would expect to be an effective workaround too. The fact that only &, which unconditionally sequences commands, works, indicates that even cmd.exe-internally the failure status of the batch file isn't yet known as part of the same statement - only afterwards - which appears to be another manifestation of the bug.
Why an explicit exit call following the batch-file call as part of the same statement does relay the batch file's (zero or nonzero) exit code correctly is anyone's guess, but it seems to work.
Fortunately, the workaround is also effective for solving related exit-code problems in batch files that do not contain explicit exit /b / exit calls - see this answer.
Syntax considerations:
From PowerShell, the alternative to passing a single command-string is to pass individual arguments and escape the & character as `& (using `, the "backtick", PowerShell's escape character) so as to prevent PowerShell from interpreting it (quoting it as '&' would work too):
cmd /c .\test.cmd "a ^ 2" `& exit
From an environment that doesn't involve a shell, such as when launching from Task Scheduler, the `-escaping of & is not needed (and mustn't be used).
Not having to enclose the entire for-cmd.exe command in quotes makes it easier to pass arguments that (a) individually require double quotes and (b) involve references to PowerShell variables and/or expressions, given that the latter requires use of "..." rather than '...':
# Passing *individual* arguments makes double-quoting easier.
PS> cmd /c .\test.cmd "Version = $($PSVersionTable.PSVersion)" `& exit; $LASTEXITCODE
["Version = 7.2.0-preview.4"]
1
Using quoting of the entire for-cmd.exe command would be awkward in this case, due to the need to escape the argument-specific " chars.:
# Embedded double quotes must now be `-escaped.
PS> cmd /c ".\test.cmd `"Version = $($PSVersionTable.PSVersion)`" & exit"
["Version = 7.2.0-preview.4"]
1
The Native module (authored by me; install it from the PowerShell Gallery with Install-Module Native) comes with function ie, which:
automatically applies the above workaround.
generally compensates for problems arising from PowerShell's broken argument-passing to external programs (see this answer).
# After having run Install-Module Native:
# Use of function `ie` applies the workaround behind the scenes.
PS> ie .\test.cmd "Version = $($PSVersionTable.PSVersion)"; $LASTEXITCODE
["Version = 7.2.0-preview.4"]
1
The hope is that what function ie does will become a part of PowerShell itself, as part of the upcoming (in PowerShell v7.2) PSNativeCommandArgumentPassing experimental feature that is intended as an opt-in fix to the broken argument-passing - see GitHub issue #15143

How to return non zero exit code from a Powershell module function without closing the powershell console?

My powershell module has a function and I want it to return a non zero exit code. However, being a module function it is loaded into the context of the powershell console when I run Import-Module. So, when the function executes exit 1 - poof, goes the console window!
At least, this is how I explain it closing the powershell window when it exits.
So, how can a ps module function exit with a non zero exit code without killing the console where the module was imported?
P.S.
I did notice several questions on SO about this subject, but none seems to examine this particular case.
EDIT 1
I would like to provide some context. I have a PS module with a lot of functions. Some of them are used as-is in Azure DevOps YAML build scripts. The latter recognizes non-zero exit codes and aborts the pipeline, so it is not necessary to throw from a function to abort the flow.
However, if I want to call that function from the console, e.g. to test something quickly, and it exits with a non-zero code, the whole console window is closed. This is extremely annoying.
Sometimes there is a workaround. So, instead of this code:
dotnet build ...
$ExitCode = $LastExitCode
DoSomethingInAnyCase()
if ($ExitCode)
{
exit $ExitCode
}
We can have the following version:
try
{
dotnet build ...
}
finally
{
DoSomethingInAnyCase()
}
Both versions would correctly return the right exit code, but because the second one does not have the explicit exit statement, it does not close the console.
You'll have to set $global:LASTEXITCODE in order to set the exit code, but note that PowerShell functions aren't really meant to set exit codes, only scripts, and the latter only for reporting exit codes to the outside world, via the PowerShell process' own exit code, when PowerShell is called via its CLI (powershell.exe for Windows PowerShell, pwsh for PowerShell Core) from a build tool, scheduled task, or another shell, for instance.
Also note that setting $global:LASTEXITCODE directly:
does not make $?, the automatic success-status variable, reflect $false in the caller's context, the way that exit <nonzero-value> does from a script and the way that calling an external program that reports a nonzero exit code does.
is not enough to make the PowerShell process as a whole report this exit code.
In short: All this gains you is that the caller can inspect $LASTEXITCODE after your function was called, as you would after calling an external program.
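A minimal sketch of what that looks like in practice (the function name is made up):
function Invoke-BuildStep {
    # ... do the actual work here ...
    # Signal failure the way an external program would, without exiting the session:
    $global:LASTEXITCODE = 3
}
Invoke-BuildStep
$?              # -> $true: setting $LASTEXITCODE does not make $? report $false
$LASTEXITCODE   # -> 3, which the caller can now inspect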
Generally, exit codes are an inter-process concept and do not fit well into PowerShell's in-process world.
For more information about exit codes in PowerShell, see this post.
PowerShell's analog to exit codes is $?, the automatic, Boolean success-status variable ($true or $false), which reflects a PowerShell command's success immediately afterwards.
(If that command is an external-program call, an exit code of 0 sets $? to $true, and any nonzero one sets it to $false).
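For example, using cmd.exe as a stand-in external program:
cmd /c 'exit 5'   # external program that reports exit code 5
$?                # -> $false, because the exit code was nonzero
$LASTEXITCODE     # -> 5, the specific exit code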
As it turns out, setting that was what you really meant to ask.
As of PowerShell Core 7.0.0-preview.5, you cannot set $? directly.
For now, these are the only ways to cause $? to reflect $false in the caller's scope, and, conversely, you cannot always ensure that it is $true:
From a script: Exit the script with exit <nonzero-integer>
From a cmdlet or function (script as well):
Throw a script-terminating error (Throw) or a statement-terminating error ($PSCmdlet.ThrowTerminatingError()) - the latter being only available in advanced functions and scripts.
Write an error to PowerShell's error stream with $PSCmdlet.WriteError(), which is only available in advanced functions and scripts (a minimal sketch follows these notes).
Note that this unexpectedly currently does not apply to the Write-Error cmdlet - see this GitHub issue
Note that both techniques invariably involve emitting an error.
Since it sounds like that's precisely what you're trying to avoid, you'll have to wait until the ability to set $? directly is implemented.
The decision to implement this ability has been made, but it's unclear when it will be implemented.
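The following is a minimal sketch of the second technique, an advanced function that uses $PSCmdlet.WriteError() (the function name and error details are made up):
function Test-Failure {
    [CmdletBinding()]  # makes this an advanced function, so $PSCmdlet is available
    param()
    # Emit a non-terminating error via the cmdlet API; unlike the Write-Error
    # cmdlet, this makes $? report $false in the caller's scope.
    $PSCmdlet.WriteError(
        [System.Management.Automation.ErrorRecord]::new(
            [System.Exception]::new('Something went wrong'),
            'SomethingWentWrong',
            [System.Management.Automation.ErrorCategory]::NotSpecified,
            $null))
}
Test-Failure   # prints the error
$?             # -> $false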
Workaround: Run cmd.exe and set whatever exit code you want before exiting the function. Example:
function Test-Function {
    $cmd = Join-Path ([Environment]::GetFolderPath([Environment+SpecialFolder]::System)) "cmd.exe"
    & $cmd /c exit 3
    # $LASTEXITCODE will be set to 3
}

PowerShell and process exit codes

This self-answered question tries to address two distinct aspects of dealing with process exit codes in PowerShell:
In PowerShell code, how can you query the exit code set by an external process (a call to an external program), and how do such exit codes integrate with PowerShell's error handling?
When someone else calls PowerShell via its CLI, pwsh (PowerShell Core) / powershell.exe (Windows PowerShell), what determines the PowerShell process' exit code that communicates success vs. failure to the calling process (which could be a build / CI / automation-server task, a scheduled task, or a different shell, for instance).
Current as of PowerShell [Core] 7.2.1
PowerShell-internal use of exit codes:
PowerShell-internally, where native PowerShell commands generally run in-process, exit codes from child processes that run external programs play a very limited role:
Native PowerShell commands generally don't set exit codes and don't act on them.
PowerShell has an abstract counterpart to exit codes: $?, the automatic, Boolean success-status variable:
It reflects whether the most recently executed command had any errors, but in practice it is rarely used, not least because - up to version 6.x - something as seemingly inconsequential as enclosing a command in (...) resets $? to $true - see GitHub issue #3359 - and because using Write-Error in user functions doesn't set $? to $false - see GitHub issue #3629; however, eventually providing the ability for user code to set $? explicitly has been green-lit for a future version.
While $? also reflects (immediately afterwards) whether an external program reported an exit code of 0 (signaling success, making $? report $true) or a nonzero exit code (typically signaling failure, making $? $false), it is the automatic $LASTEXITCODE variable that contains the specific exit code as an integer, and that value is retained until another external program, if any, is called in the same session.
Caveat: Due to cmd.exe's quirks, a batch file's exit code isn't reliably reported, but you can work around that with cmd /c <batch-file> ... `& exit - see this answer; GitHub issue #15143 additionally suggests building this workaround into PowerShell itself.
Also, up to v7.1, $? can report false negatives if the external program reports exit code 0 while also producing stderr output and there is also a PowerShell redirection involving 2> or *> - see this answer and GitHub issue #3996; as of PowerShell Core 7.2.0-preview.4, the corrected behavior is available as the experimental feature PSNotApplyErrorActionToStderr.
Finally, it's best to treat $LASTEXITCODE as read-only and let only PowerShell itself set it. (Technically, the variable is writeable and lives in the global scope, so if you do want to modify it manually after all, be sure to assign to $global:LASTEXITCODE, so as not to accidentally create a transient local copy that has no effect.)
Unlike terminating errors or non-terminating errors reported by PowerShell-native commands, nonzero exit codes from external programs cannot be automatically acted upon by the $ErrorActionPreference preference variable; that is, you cannot use that variable to silence stderr output from external programs, nor can you, more importantly, choose to abort a script via value 'Stop' when an external program reports a nonzero exit code.
Better integration of external programs into PowerShell's error handling is being proposed in RFC #277.
How to control what PowerShell reports as its exit code when it is called from the outside:
Setting an exit code that at least communicates success (0) vs. failure (nonzero, typically) is an important mechanism for letting outside callers know whether your PowerShell code succeeded overall or not, such as when being called from a scheduled task or from an automation server such as Jenkins via the PowerShell CLI (command-line interface) - pwsh for PowerShell [Core] vs. powershell.exe for Windows PowerShell.
The CLI offers two ways to execute PowerShell code, and you can use exit <n> to set an exit code, where <n> is the desired exit code:
-File <script> [args...] expects the path of a script file (*.ps1) to execute, optionally followed by arguments.
Executing exit <n> directly inside such a script file (not inside another script that you call from that script) makes the PowerShell process report its exit code as <n>.
If a given script file exits implicitly or with just exit (without an exit-code argument), exit code 0 is reported.
-Command <powershell-code> expects a string containing one or more PowerShell commands.
To be safe, use exit <n> as a direct part of that command string - typically, as the last statement.
If your code is called from tools that check success by exit code, make sure that all code paths explicitly use exit <n> to terminate.
Caveat: If the PowerShell process terminates due to an unhandled script-terminating error - irrespective of whether the CLI was invoked with -File or -Command - the exit code is always 1.
A script-terminating (fatal) error is either generated from PowerShell code with the throw statement, by escalating a less severe native PowerShell error with -ErrorAction Stop or $ErrorActionPreference = 'Stop', or by pressing Ctrl-C to forcefully terminate a script.
If exit code 1 isn't specific enough (it usually is, because typically only success vs. failure needs to be communicated), you can wrap your code in a try / catch statement and use exit <n> from the catch block.
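A minimal sketch of that pattern (the specific exit code 2 and the failing command are just examples):
try {
    # Your code; any script-terminating error ends up in the catch block.
    Get-Item C:\NoSuchFile -ErrorAction Stop
    exit 0   # success
}
catch {
    [Console]::Error.WriteLine($_)   # optionally comment on the failure
    exit 2                           # report a specific, nonzero exit code
}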
The exact rules for how PowerShell sets its process exit code are complex; find a summary below.
How PowerShell sets its process exit code:
If an unhandled script-terminating error occurs, the exit code is always 1.
With -File, executing a script file (*.ps1):
If the script directly executes exit <n>, <n> becomes the exit code (such statements in nested calls are not effective).
Otherwise, it is 0, even if non-terminating or statement-terminating errors occurred during script execution.
With -Command, executing a command string containing one or more statements:
If an exit <n> statement is executed directly as one of the statements passed in the command string (typically, the last statement), <n> becomes the exit code.
Otherwise, it is the success status of the last statement executed, as implied by $?, that determines the exit code:
If $? is:
$true -> exit code 0
$false -> exit code 1 - even in the case where the last executed statement was an external program that reported a different nonzero exit code.
Given that the last statement in your command string may not be the one whose success vs. failure you want to signal, use exit <n> explicitly to reliably control the exit code, which also allows you to report specific nonzero exit codes.
For instance, to faithfully relay the exit code reported by an external program, append ; exit $LASTEXITCODE to the string you pass to -Command.
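For example, using whoami with an invalid argument as a stand-in for a failing external program:
# From another shell or a scheduled task (no shell pre-expansion), you can pass:
#   pwsh -noprofile -c "whoami -nosuch; exit $LASTEXITCODE"
# From a PowerShell session, use single quotes so the *inner* instance expands the variable:
pwsh -noprofile -c 'whoami -nosuch; exit $LASTEXITCODE'
$LASTEXITCODE   # -> 1 (whoami's exit code, relayed by the inner pwsh)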
Inconsistencies and pitfalls as of PowerShell 7.0:
Arguably, -Command (-c) should report the specific exit code of the last statement - provided it has one - instead of the abstract 0 vs. 1. For instance, pwsh -c 'findstr'; $LASTEXITCODE should report 2, findstr.exe's specific exit code, instead of the abstract 1 - see GitHub issue #13501.
Exit-code reporting with *.ps1 files / the -File CLI parameter:
Only an explicit exit <n> statement meaningfully sets an exit code; arguably, it should again be the last statement executed in the script that determines the exit code (which, of course, could be an exit statement), as is the case in POSIX-compatible shells and with -Command, albeit in the suboptimal manner discussed.
When you call a *.ps1 script via -File or as the last statement via -Command, PowerShell's exit code in the absence of the script exiting via an exit statement is always 0 (except in the exceptional Ctrl-C / throw cases, where it becomes 1).
By contrast, when called in-session, again in the absence of exit, $LASTEXITCODE reflects the exit code of whatever external program (or other *.ps1 if it set an exit code) was executed last - whether executed inside the script or even before.
In other words:
With -File, unlike with -Command, the exit code is categorically set to 0 in the absence of an exit statement (barring abnormal termination).
In-session, the exit code (as reflected in $LASTEXITCODE) is not set at all for the script as a whole in the absence of an exit statement.
See GitHub issue #11712.

Equivalent of bash "set -o errexit" for windows cmd.exe batch file?

What's the Windows batch file equivalent of set -o errexit in a bash script?
I have a long batch file filled with different programs to run on the Windows command line... basically it's an unrolled makefile with every compiler command that needs to be run to build an exe in a long sequence of commands.
The problem with this method is that I want it to exit the batch script on the first non-zero return code generated by a command in the script.
As far as I know, Windows batch files have a problem where they don't automatically exit on the first error without adding a lot of repetitive boilerplate code between each command to check for a non-zero return code and to exit the script.
What I'm wondering about, is there an option similar to bash's set -o errexit for Windows cmd.exe? or perhaps a technique that works to eliminate too much boilerplate error checking code... like you set it up once and then it automatically exits if a command returns a non-zero return code without adding a bunch of junk to your script to do this for you.
(I would accept a PowerShell option as well instead of cmd.exe, except PowerShell isn't very nice with old-Unix-style command flags like -dontbreak -y ... breaking those commands without adding junk to your command line like quotes or escape characters... not really something I want to mess around with either...)
CMD/Batch
As Ken mentioned in the comments, CMD does not have an equivalent to the bash option -e (or the equivalent -o errexit). You'd have to check the exit status of each command, which is stored in the variable %errorlevel% (equivalent to $? in bash). Something like
if %errorlevel% neq 0 exit /b %errorlevel%
PowerShell
PowerShell already automatically terminates script execution on errors in most cases. However, there are two error classes in PowerShell: terminating and non-terminating. The latter just displays an error without terminating script execution. The behavior can be controlled via the variable $ErrorActionPreference:
$ErrorActionPreference = 'Stop': terminate on all errors (terminating and non-terminating)
$ErrorActionPreference = 'Continue' (default): terminate on terminating errors, continue on non-terminating errors
$ErrorActionPreference = 'SilentlyContinue': continue on non-terminating errors without displaying them (terminating errors still terminate)
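Note that $ErrorActionPreference does not act on nonzero exit codes from external programs such as compilers (see the exit-codes discussion above). If you want errexit-like behavior for those too, one option is a small wrapper function that checks $LASTEXITCODE after each call (a hedged sketch; the function name and the compiler commands are just examples):
function Invoke-Checked {
    param([scriptblock]$Command)
    & $Command
    if ($LASTEXITCODE -ne 0) {
        # A script-terminating error aborts the script, errexit-style.
        throw "Command failed with exit code ${LASTEXITCODE}: $Command"
    }
}
# Usage: wrap each external command of the build sequence.
Invoke-Checked { cl /nologo /c main.c }
Invoke-Checked { link /out:app.exe main.obj }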
PowerShell also allows more fine-grained error handling via try/catch statements:
try {
    # run command here
} catch [System.SomeException] {
    # handle exception of a specific type
} catch [System.OtherException] {
    # handle exception of a different type
} catch {
    # handle all other exceptions
} finally {
    # cleanup statements that are run regardless of whether or not
    # an exception was thrown
}