As part of a project, I need to run two bat files with a PowerShell script. These bat files will perform several operations including the creation of environment variables.
But there is a problem. The first set of environment variables is created (those created by the first bat file) but not the second set (which should be created by the execution of the second bat file). The execution of the second one went "well" because the rest of the operations it has to perform are done correctly.
I use the same function for the execution of these .bat files.
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine")
$argList = "/c '"$batFile'""
$process = Start-Process "cmd.exe" -ArgumentList $argList -Wait -PassThru -Verb runAd
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine")
I use the line
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine")
to reload the environment variables. But that didn't solve the problem. Without this line I have the environment variables corresponding to the second bat file (the one running second) being created. With it, I have only those of the first one.
I also noticed that if I re-run my PowerShell program (so re-run the batch files), I had all the environment variables created.
Well 1st off:
It looks like you have an error in your code for executing your 2nd batch file, so I suspect you re-wrote your code to post it here and it's not at all in the original form; as written, you would never get anything further.
You know what, TL;DR: try this.
I've been writing a lot, and it's a rabbit hole considering the snippet of code isn't enough of the process, the code is obviously re-written (as it introduces a clearly different bug), and your description leaves something to be desired.
I'll leave some of the other points below, re-ordered, and you can feel free to read/ignore whatever.
But here is the long and short of it.
If you need to run these CMD scripts and get some stuff out of them to add to Path, have them run normally and echo the path they create to stdout, then capture it in a PowerShell variable, dedupe it in PowerShell, and set the Path directly for your existing PowerShell environment.
Amend both of your CMD scripts (AKA batch files) to add this to the very top, before any existing lines.
@(SETLOCAL
ECHO OFF )
CALL :ORIGINAL_SCRIPT_STARTS_HERE >NUL 2>NUL
ECHO=%PATH%
( ENDLOCAL
EXIT /B )
:ORIGINAL_SCRIPT_STARTS_HERE
REM All your original script lines should be present below this line!!
The PowerShell code will basically be:
$batfile_1 = "C:\Admin\SE\Batfile_1.cmd"
$batfile_2 = "C:\Admin\SE\Batfile_2.cmd"
$Path_New = $($env:path) -split ";" | select -unique
$Path_New += $(&$batFile_1) -split ";" | ? {$_ -notin $Path_New}
$Path_New += $(&$batFile_2) -split ";" | ? {$_ -notin $Path_New}
$env:path = $($Path_New | select -unique) -join ";"
Although if you don't need the separate testable steps you could make it more concise as:
$batfile_1 = "C:\Admin\SE\Batfile_1.cmd"
$batfile_2 = "C:\Admin\SE\Batfile_2.cmd"
$env:path = $(
$( $env:path
&$batFile_1
&$batFile_2
) -split ";" | select -unique
) -join ";"
Okay, leaving the mostly-done stuff below from where I quit trying to amend my points as I followed the rabbit hole tracking pieces around, as it will shed some light on some aspects here.
2nd off: You do not need Start-Process to run a CMD script; you can run a cmd script natively and it will automatically instantiate a cmd instance.
Start-Process will spawn it as a separate process, sure, but you wait for it, and although you use -PassThru and save the result in a variable, you don't do anything with it to check its status or exit code, so you may as well just run the CMD script directly and see its StdOut in your PowerShell window, or save it in a variable to log it if needed.
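For example (a rough sketch, assuming $batFile holds the full path to your script; the log path is just an illustration):

$output = & $batFile 2>&1                      # run the batch file directly and capture its output
$output | Out-File "$env:TEMP\batfile.log"     # optional: keep a log of what it printed
if ($LASTEXITCODE -ne 0) { Write-Warning "Batch file exited with code $LASTEXITCODE" }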
3rd off: Why not just set the environment variables directly using Powershell?
I'm guessing these scripts do other stuff, but it might be that you should just echo whatever they want to set Path to back to the PowerShell script, then dedupe it and set the Path when done.
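For instance, setting a variable directly looks like this (the name and value here are made up, not taken from your scripts):

$env:MY_TOOL_HOME = 'C:\Tools\MyTool'    # affects only the current PowerShell process
[System.Environment]::SetEnvironmentVariable('MY_TOOL_HOME', 'C:\Tools\MyTool', 'User')   # persists in the registry, picked up by new processes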
4th off: $env:Path is your current environment's Path; this includes all path text that comes from the System AND the current user profile (the HKLM: and HKCU: registry keys for environment), while $( [System.Environment]::GetEnvironmentVariable("Path","Machine") ) is your System (Local Machine) Path as derived from the registry.
5th off: The operational environment is specific to each shell; when you start a new instance with CMD /c, it inherits the environment of the previous cmd instance that spawned it.
6th off: Changes made to environment variables do not 'persist', i.e. you can't open a new CMD / PowerShell instance and see them, and once you close that original cmd window they're gone, unless you edit the registry values of these items directly or use SETX in a cmd session (which is problematic and should be avoided!), and that by default ONLY affects the USER variables, not the system/local machine variables.
Thus, any changes made to the environment in one CMD instance only operate within that existing shell unless they are changes that persist in the registry, in which case they will only affect new shells that are launched.
7th off: When you launch PowerShell, it runs inside a console host process, and PowerShell inherits whatever the local machine and current user's variables are at the moment the interpreter is started. This is what ends up in $env:xxx.
8th off: Setting $env:Path = $([System.Environment]::GetEnvironmentVariable("Path","Machine")) will always change the current powershell environment to whatever is stored in the [HKLM:\] registry key for environmental variables.
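To make the three scopes concrete, a quick illustration (not part of your original code):

$env:Path                                                          # current process value, inherited at startup
[System.Environment]::GetEnvironmentVariable('Path', 'Machine')    # HKLM value from the registry
[System.Environment]::GetEnvironmentVariable('Path', 'User')       # HKCU value from the registry
# Rebuilding the process Path from both registry scopes would look like:
$env:Path = [System.Environment]::GetEnvironmentVariable('Path','Machine') + ';' +
            [System.Environment]::GetEnvironmentVariable('Path','User')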
Now, given that we don't have all of your code, only a description of what is happening, it appears you have one script, let's call it batFile_1.cmd, that is setting some variables in your environment.
For each batch file you run, whether spawned implicitly or explicitly, the new shell will inherit the previous shell's command environment.
However
Each instance of CMD which you spawn for your batch files is a separate cmd shell instance, distinct from the process that powershell.exe, and thus your script, was running in.
Now I'm just supposing what is happening, since you only give a small snippet, which is not enough to actually reproduce your real issue. It seems like you spawn a cmd instance for each script, but it's hard to know exactly what is happening without the full context; I'll go into one scenario that might be happening below.
A note: each CMD instance only inherits the values of its parent.
(I feel this is much clearer to explain by opening an actual CMD window and testing how the shell works by spawning another instance of CMD.
When the new instance is exited, the variables revert to those of the previous instance, because they only move from parent to child.)
eg:
C:\WINDOWS\system32>set "a=hello"
C:\WINDOWS\system32>echo=%a%
hello
C:\WINDOWS\system32>CMD /V
Microsoft Windows [Version 10.0.18362.1082]
(c) 2019 Microsoft Corporation. All rights reserved.
C:\WINDOWS\system32>(SET "a= there!"
More? SET "b=%a%!a!"
More? )
C:\WINDOWS\system32>echo=%b%
hello there!
C:\WINDOWS\system32>echo=%a%
there!
C:\WINDOWS\system32>exit
C:\WINDOWS\system32>echo=%a%
hello
C:\WINDOWS\system32>echo=%b%
%b%
C:\WINDOWS\system32>
But that isn't really what is happening here; you seem to be updating the command environment back to the Local Machine values from the registry.
Related
I wrote a script to build all .net projects in a folder.
Issue
The issue is I am getting a missing function error when I call Build-Sollution.
What I tried
I made sure that function was declared before I used it, so I am not really sure why it says that it is not defined.
I am new to powershell but I would think a function calling another function should work like this?
Thanks in advance!
Please see below for the error message and code.
Error Message
Line |
3 | Build-Sollution $_
| ~~~~~~~~~~~~~~~
The term 'Build-Sollution' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
Build-Sollution:
Code
param (
#[Parameter(Mandatory=$true)][string]$plugin_path,
[string]$depth = 5
)
$plugin_path = 'path/to/sollutions/'
function Get-Sollutions {
Get-ChildItem -File -Path $plugin_path -Include *.sln -Recurse
}
function Build-Sollution($solution) {
dotnet build $solution.fullname
}
function Build-Sollutions($solutions) {
$solutions | ForEach-Object -Parallel {
Build-Sollution $_
}
}
$solutions_temp = Get-Sollutions
Build-Sollutions $solutions_temp
From PowerShell ForEach-Object Parallel Feature | PowerShell
Script blocks run in a context called a PowerShell runspace. The runspace context contains all of the defined variables, functions and loaded modules.
...
And each runspace must load whatever module is needed and have any variable be explicitly passed in from the calling script.
So in this case, the easiest solution is to define Build-Sollution inside Build-Sollutions
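For example, a sketch based on the functions in your post:

function Build-Sollutions($solutions) {
    $solutions | ForEach-Object -Parallel {
        # Re-declare the helper inside the parallel script block, because each
        # parallel runspace starts without the functions defined in the caller.
        function Build-Sollution($solution) {
            dotnet build $solution.fullname
        }
        Build-Sollution $_
    }
}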
As for this...
I am new to powershell but I would think a function calling another
function should work like this?
... you cannot use the functions until you load your code into memory. You need to run the code before the functions are available.
If you are in the ISE or VSCode and the script is not saved, Select All and use the key to run: in the ISE, F8 runs the selection and F5 runs all; in VSCode, F8 runs the selection and Ctrl+F5 runs all. You can just click the menu options as well.
If you are doing this from the consolehost, then run the script using dot sourcing.
. .\UncToYourScript.ps1
It's OK to be new, we all started somewhere, but it's vital that you get ramped up first. So, beyond what I address here, be sure to spend time on YouTube and search for Beginning, Intermediate, and Advanced PowerShell videos to consume. There are tons of free training resources all over the web, and using the built-in help files would have given you the answer as well.
about_Scripts
SCRIPT SCOPE AND DOT SOURCING Each script runs in its own scope. The
functions, variables, aliases, and drives that are created in the
script exist only in the script scope. You cannot access these items
or their values in the scope in which the script runs.
To run a script in a different scope, you can specify a scope, such as
Global or Local, or you can dot source the script.
The dot sourcing feature lets you run a script in the current scope
instead of in the script scope. When you run a script that is dot
sourced, the commands in the script run as though you had typed them
at the command prompt. The functions, variables, aliases, and drives
that the script creates are created in the scope in which you are
working. After the script runs, you can use the created items and
access their values in your session.
To dot source a script, type a dot (.) and a space before the script
path.
See also:
'powershell .net projects build run scripts'
'powershell build all .net projects in a folder'
Simple build script using Power Shell
Update
As per your comments below:
Sure the script should be saved, using whatever editor you choose.
The ISE does not use PSv7 by design, it uses WPSv5x and earlier.
The editor for PSv7 is VSCode. If you run a function that contains another function, you have explicitly loaded everything in that call, and as such it's available.
However, you are saying you are using PSv7, so you need to run your code in the PSv7 consolehost or VSCode, not the ISE.
Windows PowerShell (powershell.exe and powershell_ise.exe) and PowerShell Core (pwsh.exe) are two different environments, with two different executables, designed to run side-by-side on Windows, but you do have to explicitly choose which to use or write your code to branch to a code segment to execute relative to the host you started.
For example, let's say I wanted to run a console command and I am in the ISE, but I need to run that in Pwsh. I use a function like this that I have in a custom module autoloaded via my PowerShell profiles:
# Call code by console executable
Function Start-ConsoleCommand
{
[CmdletBinding(SupportsShouldProcess)]
[Alias('scc')]
Param
(
[string]$ConsoleCommand,
[switch]$PoSHCore
)
If ($PoSHCore)
{Start-Process pwsh -ArgumentList "-NoExit","-Command &{ $ConsoleCommand }" -PassThru -Wait}
Else {Start-Process powershell -ArgumentList "-NoExit","-Command &{ $ConsoleCommand }" -PassThru -Wait}
}
All this code is doing is taking whatever command I send it and if I use the PoSHCore switch...
scc -ConsoleCommand 'SomeCommand' -PoSHCore
... it will shell out to PSCore and run the code; otherwise, it just runs from the ISE.
If you want to use the ISE with PSv7 and not do the shell-out thing, you need to force the ISE to use PSv7 to run code. See:
Using PowerShell Core 6 and 7 in the Windows PowerShell ISE
I am trying to run a PowerShell script Daily.ps1 on start-up; however, due to administrator settings (I cannot run as admin, that is not an option), I cannot run it through the Task Scheduler. For example, these are the contents of Daily.ps1:
if (1 -eq 1) {
"Hello there!"
}
So I tried to have a batch script Daily.cmd run on start-up (through the start-up folder), which runs, but I cannot get it to run Daily.ps1, and I get a message saying running scripts is disabled. (Both files are in the same directory.)
powershell C:\Users\Simon\Desktop\Daily.ps1
File C:\Users\Simon\Desktop\Daily.ps1 cannot be loaded because running scripts is disabled on this system
I then tried using this line of code from a trick I learned to bypass running scripts directly:
powershell cat Daily.ps1 | powershell invoke-expression
This works but only for one-liners. So I added the -raw flag for cat, which works when in PowerShell, but not in CMD. For some reason, Daily.ps1's text is still stored as an array of strings. (Apologies for formatting.)
cmdlet Invoke-Expression at command pipeline position 1
Supply values for the following parameters:
Command: if (1 -eq 1) {
invoke-expression : At line:1 char:14
if (1 -eq 1) {
Missing closing '}' in statement block or type definition.
At line:1 char:1
invoke-expression ~~~~~~~~~~~~~~~~~
So I tried to add this to Daily.cmd:
powershell
cat -raw Daily.ps1 | powershell invoke-expression
However, the rest of the script doesn't get executed at all once I enter PowerShell.
I don't know how to get Daily.ps1 to run through a batch command. Is there a way I missed, or is one of the ways I tried faulty (without admin rights)?
Edit: To clarify, ExecutionPolicy is set to Restricted, and that cannot be changed. Additionally, I can run PowerShell scripts fine through right clicking the file and running with PS.
Create a scheduled task to run at computer startup. Put powershell.exe in the field "program/script" and -File "C:\path\to\your.ps1" in the field "arguments" (you may want to avoid placing the script in a user profile). Set the task to run whether the user is logged on or not.
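If you prefer to create the task from PowerShell rather than the GUI, a rough sketch with the ScheduledTasks cmdlets (the task name and path are placeholders, and creating a startup task may still require sufficient rights):

$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File "C:\path\to\your.ps1"'
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName 'DailyScript' -Action $action -Trigger $trigger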
I found an answer!
After trying many different methods, I came across this line of code that allows you to run PS scripts even if ExecutionPolicy is set to Restricted:
start powershell "cat -raw C:\Users\Simon\Desktop\Daily.ps1 | invoke-expression"
This runs powershell and uses the trick of piping the results of cat -raw [file.ps1] to invoke-expression. This is a useful workaround if ExecutionPolicy is set to Restricted.
Then you can save this line to a .cmd or .bat file and use either Task Scheduler (more customizability) or put it in the startup folder.
P.S. For everyone who kept saying to change the ExecutionPolicy to something other than Restricted: I clearly stated multiple times that I cannot do that (not admin), nor will the sysadmin do that, nor will it ever happen (it must stay Restricted) :)
When you're in File Explorer you can click on File > Open Windows Powershell (or its icon in the Quick Access Toolbar) to start an instance of Powershell in the directory that your File Explorer is in. I would like to then automatically run a simple command in this directory and close the Powershell window after it is done.
I have tried adding my command to my PowerShell profile, but it executes before the path variable has been set, and it runs with $pwd being equal to C:\Users\MyUsername (my home directory) or C:\WINDOWS\system32 (seems to be a race condition of some sort, no idea why it does one or the other). To the best of my understanding this is because the File Explorer "open in powershell" button opens powershell and THEN cd's to the directory I was in in File Explorer. So when the profile.ps1 is run, it is using the only directories it knows of, since the cd call hasn't been made yet. This is similar to running the command start powershell.exe in cmd vs start powershell.exe -command "cd 'C:\wherever'". The former correctly runs my profile command while the latter uses the current directory of cmd and not the C:\wherever.
So, obviously the $pwd variable is being assigned at different times in the case of opening it from cmd and opening it from file explorer. Is there some way to delay the execution of a command in the profile until after the shell has fully loaded? Simply sleeping the script doesn't help.
Alternatively, if anyone knows how to edit the registry so that I can change the behavior of clicking File > Open Windows Powershell (since it must have access to some variable storing the current directory and I assume it calls the Powershell executable with this variable as an argument being cd'd to), that would work too.
Then again I could be incredibly naive about how File > Open Windows Powershell and the Powershell instantiation process works.
Any help is greatly appreciated, thank you!
I figured it out in the most hacky, gross way ever, but without easy access to Windows internals this is the only working method I could find. I set up my powershell profile to make my window title my prompt like so:
function Prompt
{
$host.ui.RawUI.WindowTitle = $(get-location)
"PS> "
}
Then I set up a task in the Task Scheduler that was triggered by powershell reaching its prompt (there are 3 possible hooks: when the console is starting up, when it starts an IPC listening thread, and when the console is ready for input). I used the Event Viewer to do this (I was going to post screenshots but I don't have 10 reputation yet).
Then, I set the action on this task to run the script shown below, which reads the window title of my first instance of powershell:
# Give the first PowerShell window a moment to finish writing its title
Start-Sleep -s 1
# Find the other powershell process (the one opened from File Explorer)
$A = Get-Process -Name powershell | Where-Object -FilterScript {$_.Id -ne $PID}
# Its window title is the current directory, set by the Prompt function above
$B = $A.MainWindowTitle
& C:\Program` Files\MyProgram\MyProgram.exe "$B"
# Close both the first instance and this one
stop-process -Id $A.Id
stop-process -Id $PID
This whole menagerie of events properly runs my program with the current file explorer directory as an argument (and then closes powershell) when I click the little powershell icon on the quick access toolbar in file explorer.
Found a much cleaner and faster way to do this. All I had to do was set up my profile to look like this; no tasks or second instance of powershell required:
function Prompt
{
& C:\Program` Files\MyProgram\MyProgram.exe "$pwd"
stop-process -Id $PID
}
I have a batch script called SET_ENV.bat which contains environment variables that are used by other batch scripts. Currently this SET_ENV.bat is launched by existing batch scripts.
Now I have a need to use Powershell script and I would like to launch the same SET_ENV.bat. I managed to do this using:
cmd.exe /c ..\..\SET_ENV.bat
I know that the batch file was run because it contained an echo
echo *** Set the environment variables for the processes ***
But after looking at the environment variables, I can see that none of them have been updated. Is there something that is preventing me from updating environment variables with Powershell + batch file combo?
I have tried running SET_ENV.bat directly from the command line and it works. I have also tried the Start-Process cmdlet with "-Verb runAs", but that didn't do any good.
Launching PowerShell again at the end of the batch commands will keep every environment variable so far.
My use case was: set up the Anaconda environment, set up the MSVC environment, continue with that. The problem is both Anaconda and MSVC have a separate batch script that initialises the env.
The following command starting from PowerShell will:
initialise Anaconda
initialise MSVC
re-launch PowerShell
cmd.exe "/K" '%USERPROFILE%\apps\anaconda3\Scripts\activate.bat %USERPROFILE%\apps\anaconda3 && "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat" && powershell'
Just swap the paths with what you need. Note that if a path contains spaces it needs to be inside double quotes ".
Breaking down the call above:
cmd.exe "/K": call cmd and do not exit after the commands finish executing /K
The rest is the full command, it is wrapped in single quotes '.
%USERPROFILE%\apps\anaconda3\Scripts\activate.bat %USERPROFILE%\apps\anaconda3: calls activate.bat with parameter ...\anaconda3
&& "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat": && and if the previous command didn't fail, run the MSVC vars setup file. This is wrapped in " as it has spaces in it.
&& powershell: finally run PowerShell. This will now contain all environment variables from the ones above.
Just adding a better way of doing the aforementioned setup: using Anaconda's PowerShell init script to actually get it to display the environment name on the prompt. I won't break this down as it's just a modified version of the command above.
cmd.exe "/K" '"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat" && powershell -noexit -command "& ''~\apps\anaconda3\shell\condabin\conda-hook.ps1'' ; conda activate ''~\apps\anaconda3'' "'
Note that the single quotes in the powershell call are all doubled up to escape them
Environment variables are local to a process and get inherited (by default at least) to new child processes. In your case you launch a new instance of cmd, which inherits your PowerShell's environment variables, but has its own environment. The batch file then changes the environment of that cmd instance, which closes afterwards and you return back to your PowerShell script. Naturally, nothing in PowerShell's environment has changed.
It works in cmd since batch files are executed in the same process, so a batch file can set environment variables and subsequently they are available, since the batch file wasn't executed in a new process. If you use cmd /c setenv.cmd in an interactive cmd session you will find that your environment hasn't changed either.
You can try another option, such as specifying the environment variables in a language-agnostic file, to be read by either cmd or PowerShell to set the environment accordingly. Or you could launch your PowerShell scripts from cmd after first running your batch file. Or you could set those environment variables under your user account to no longer have to care for them. Or you just have one setenv.cmd and one setenv.ps1 and keep them updated in sync.
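For the language-agnostic-file option, the PowerShell side could be as simple as this sketch (assuming a hypothetical env.txt with one NAME=VALUE pair per line):

Get-Content .\env.txt | ForEach-Object {
    $name, $value = $_ -split '=', 2      # split on the first '=' only
    Set-Item -Path "env:$name" -Value $value
}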
Summary
Write the environment variables to file and load them after.
Example
I've included an MWE below that exemplifies this by saving and loading the VS-studio environment.
Usage
To run the script, call New-Environment. You will now be in the VS2022 environment.
How it works
The first time New-Environment is called, the VS-studio environment batch file runs, but the results are saved to disk. On returning to PowerShell the results are loaded from disk. Subsequent times just use the saved results without running the environment activator again (because it's slow). The New-Environment -refresh parameter may be used if you do want to resave the VS-studio environment again, for instance if anything has changed.
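Typical usage then looks like:

New-Environment            # load the saved VS2022 environment (creates the save file on the first run)
New-Environment -refresh   # re-run vcvars64.bat and overwrite the saved environment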
Script
NOTE: This script must be present in your powershell $profile so the second instance can access the function! Please ensure to change the VS path to reflect your own installation.
function New-Environment()
{
param (
[switch]
$refresh
)
Write-Host "Env vars now: $($(Get-ChildItem env: | measure-object).Count)"
$fileName = "$home\my_vsenviron.json"
if ((-not $refresh) -and (Test-Path $fileName -PathType Leaf))
{
Import-Environment($fileName)
return;
}
$script = '"C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvars64.bat" && '
$script += "pwsh --command Export-Environment `"$fileName`""
&"cmd.exe" "/C" $script
Import-Environment($fileName)
}
function Export-Environment($fileName)
{
Get-ChildItem env: | Select-Object name,value | ConvertTo-Json | Out-File $fileName
Write-Host "I have exported the environment to $fileName"
}
function Import-Environment($fileName)
{
Get-Content $fileName | ConvertFrom-json | ForEach-Object -process {Set-Item "env:$($_.Name)" "$($_.Value)"}
Write-Host "I have imported the environment from $fileName"
Write-Host "Env vars now: $($(Get-ChildItem env: | measure-object).Count)"
}
So here's the deal. Because of a number of... let's just say not PowerShell smart people who will be using an incredibly complex application that I just finished, I need the ability to package it in an exe wrapper.
This shouldn't be that hard
I was able to successfully use PS2EXE, except for some reason with AD, it throws out a whooooole bunch of AD text that I can't get rid of. Tried to fix that for a few days before getting frustrated and moving on.
Then, I discovered PowerGUI. I can't say that I like it, at all. However, its compiler was exactly what I was looking for! Except for the fact that Exchange 2010 snap-ins are not compatible with .NET 4.5 through this application.
I want to make it very clear that my script works perfectly on multiple different computers, but as soon as I use any of these tools, everything breaks.
An exe is the best thing that I can think of to simplify the interface, and keep the Technically Intellectually Stunted from breaking everything, or running to me with every little error because they somehow got into the code and typed something and saved it, and now nothing works and it's the end of the world and they have no idea what happened.
If you guys know of any tools to wrap this up into an exe, or have any other ideas on how to help, I would really appreciate anything you guys can give me.
You have never failed me in the past!
From my point of view, if you really want an EXE file you should write a .NET application; it's not so hard to embed PowerShell cmdlets.
To keep end users from modifying your code, I know two solutions:
First: set the execution policy to AllSigned on the user's computer and sign the scripts you deploy. You can manage with your own certificates (not expensive at all) or public certificates (more expensive). One drawback of this solution is that it does not prevent users from seeing the code. Another big drawback is that a PKI and code-signing infrastructure is a lot of wasted time.
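For reference, signing a deployed script with an already-installed code-signing certificate is roughly (the certificate lookup and file name are only an example):

$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
Set-AuthenticodeSignature -FilePath .\MyScript.ps1 -Certificate $cert
# and on the user's machine (once, as admin): Set-ExecutionPolicy AllSigned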
Second: for non-interactive scripts (be careful, it's a kind of makeshift job):
Create a new user account
Only allow access to the script file for the new account.
Set up a task in the Windows scheduler to run that script file with PowerShell under that specific account. The permissions for the scheduled tasks allow read and execute access to the user(s). Then set the task to "disabled".
Whenever the script file needs to be run, the corresponding task is manually started by the user.
Using this solution will also allow you to remote execute your script.
When I had a similar deployment problem - 1) users didn't know PowerShell, 2) I didn't want them to have to understand things like execution policy, 3) or how to start PS, 4) etc. - I wrapped it in a batch file. I also wanted to make sure that experienced PS users still had the capabilities of PS, so the batch file determined if it was running under PS or not and ran in the current PS session if applicable. I was never too worried that users would mess with the script - they were happy if it "just worked". So whether users liked Explorer, CMD.EXE, or PS, they all were accommodated.
The batch file I wrote first runs a bit of powershell code to determine if the process of the batch file is the grandchild of a powershell process. If it is, then the batch file is being invoked from PS. The execution policy is also checked, and if it is lenient enough then Wscript.SendKeys is used to send keystrokes to PS to get the script running in the current PS session. If it isn't, then it starts a new PS session using the -ExecutionPolicy parameter and passes the script as a command line argument (-Command).
This bit of powershell code communicates back to the .CMD file using a return code. Sorry it's cryptic, but the length of command line parameters is limited. Here's the code:
set scr= $mp=[diagnostics.process]::getcurrentprocess().id
set scr=%scr%; $pp=([wmi]\"win32_process.handle='$mp'\").parentprocessid
set scr=%scr%; $gp=([wmi]\"win32_process.handle='$pp'\").parentprocessid
set scr=%scr%; $ep=[int][microsoft.powershell.executionpolicy](get-executionpolicy)
set scr=%scr%; try {$pnp=1-[int](([wmi]\"win32_process.handle='$gp'\").Name -eq \"powershell.exe\")
set scr=%scr%; } catch {$pnp=1}
set scr=%scr%; $ev = (8 * $pnp + $ep) -band 0xB; %wo% pp: $pp gp: $gp ev: $ev; if ($ev -le 1) {
set scr=%scr% %wo% Launching within existing powershell session...`n;
set scr=%scr% $w=new-object -com wscript.shell;$null=$w.appactivate($gp);
set scr=%scr%; $w.sendkeys(\"^&{{}`$st =cat "%me%";`$sc=`$st -join [char]10 -split 'rem PS script';
set scr=%scr% `$script:myArgs = `\" %*`\";`$sb=[scriptblock]::create{(} `$sc[3]{)};. `$sb{}}~\")
set scr=%scr%; }
set scr=%scr%; exit $ev
powershell -noprofile -Command %scr%
%wo% is to allow debugging this "checker script". If debugging is on, %wo% is set to write-host. Otherwise it is set to define a "null" function and then invoke that null function. The null function doesn't do anything, so the message that is the argument to the function is not output.
Note the escaping when invoking SendKeys. ^ is the CMD.EXE escape character, and SendKeys has its own escape mechanism, as does PS.
If run from PS you end up in a PS session thanks to SendKeys. Otherwise the batch file does this:
set scr= ren function:prompt prompto
set scr=%scr%; function prompt{ 'myApp: '+(prompto)}
set scr=%scr%; $st= (cat %me%) -join \"`n\";
set scr=%scr%; $sx=($st -split 'rem PS script')
set scr=%scr%; $sc=$sx[3]
set scr=%scr%; %wo% myArgs: $myArgs script length: $sc.length
set scr=%scr%; ^&{$script:myArgs=\"%*\"; iex $sc}
title MyApp
rem Change the number of lines on the console if currently set to 25
for /f "tokens=2" %%i in ('mode con^|findstr Lines:') do if %%i LEQ 25 (mode con lines=50&color 5F)
powershell -noexit -noprofile -command "%scr%"
This "helper script" also can't be too long. So the helper script reads the original .CMD file and then splits it by using the string 'rem PS script'. That string will be in both this helper script as well as in the batch file (separating the batch file statements from PS statements). In my case the string is also in the batch file comments, so that is why the index of 3 is used.
Your PS script can define functions or a module. Your PS script can also output some introductory info to explain to users how to get started, how to get help, or whatever you want.
Rather than just using the PS command line, your PS script could create its own interactive environment (using Read-Host for example). However I didn't want to do that because it would have prevented experienced PS users from using their knowledge about PS. For example, if your script requires a username/password, an experienced PS user could use get-credential to create a credential to send to your script.