How does dotenv-cli with "--" (double-dash) run commands? - powershell

In my project I am trying to use dotenv-cli with pnpm. I am using PowerShell 7.2.1 on Windows. I have a monorepo with a package api that has a script dev in its package.json.
First what I tried was:
dotenv -e .\.env -- pnpm dev --filter api
And it did not work:
 ERR_PNPM_RECURSIVE_EXEC_FIRST_FAIL  not found: dev
But when I tried:
dotenv -e .\.env -- pnpm -- dev --filter api
It worked well.
As I have read, -- signifies the end of command options, after which only positional arguments are accepted. So why do I need to use it twice for my command to work? Why does it work like that?

The problem is that when you call from PowerShell (unlike from cmd.exe), the command name dotenv resolves to a PowerShell script, namely dotenv.ps1, as you report.
When PowerShell calls a PowerShell-native command - including .ps1 files - its own parameter binder interprets the (first) -- argument and removes it; that is, the target command never sees it.
(The semantics of -- are analogous to those of Unix utilities: -- tells the parameter binder to treat subsequent arguments as positional ones, even if they look like parameter (option) names, such as -foo.)
Thus, unfortunately, you need to specify -- twice in order to pass a single -- instance through to the .ps1 script itself:
# The first '--' is removed by PowerShell's parameter binder.
# The second one is then received as an actual, positional argument by
# dotenv.ps1
dotenv -e .\.env -- -- pnpm dev --filter api
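You can observe the parameter binder's behavior directly with a throwaway function (Show-Args is a hypothetical name, used only for illustration):
# A function that simply echoes its positional arguments.
function Show-Args { $args }
Show-Args -- -foo bar     # -> -foo, bar  (the '--' was removed by the binder)
Show-Args -- -- -foo bar  # -> --, -foo, bar  (only the first '--' is removed)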
Alternatively, assuming that dotenv.cmd, i.e. a batch-file version of the CLI, exists too (and is also in a directory listed in $env:PATH), you can bypass this problem by calling it explicitly, instead of the .ps1 file: when calling external programs (including scripts interpreted by other shells / scripting engines, such as cmd.exe), PowerShell does not remove --:
# Calling the batch-file form of the command doesn't require
# passing '--' twice.
dotenv.cmd -e .\.env -- pnpm dev --filter api
Caveat: While it will typically not matter, the way a batch file parses its arguments differs from how PowerShell does it for its native commands.

Related

How to run Bash commands with a PowerShell Core alias?

I am trying to run a Bash command where an alias exists in PowerShell Core.
I want to clear the bash history. Example code below:
# Launch PowerShell core on Linux
pwsh
# Attempt 1
history -c
Get-History: Missing an argument for parameter 'Count'. Specify a parameter of type 'System.Int32' and try again.
# Attempt 2
bash history -c
/usr/bin/bash: history: No such file or directory
# Attempt 3
& "history -c"
&: The term 'history -c' is not recognized as the name of a cmdlet, function, script file, or operable program.
It seems the issue is related to history being an alias for Get-History - is there a way to run Bash commands in PowerShell core with an alias?
history is a Bash builtin, i.e. an internal command that can only be invoked from inside a Bash session; thus, by definition you cannot invoke it directly from PowerShell.
In PowerShell history is an alias of PowerShell's own Get-History cmdlet, where -c references the -Count parameter, which requires an argument (the number of history entries to retrieve).
Unfortunately, Clear-History is not enough to clear PowerShell's session history as of PowerShell 7.2, because it only clears one history (PowerShell's own), not also the one maintained by the PSReadLine module used for command-line editing by default - see this answer.
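As an aside, a hedged sketch for clearing both of PowerShell's histories (this relies on PSReadLine's Get-PSReadLineOption cmdlet and its HistorySavePath property, available in PSReadLine 2.x):
# Clear PowerShell's own session history.
Clear-History
# Additionally remove PSReadLine's persisted history file, if present.
$psrlHistory = (Get-PSReadLineOption).HistorySavePath
if (Test-Path $psrlHistory) { Remove-Item -Force $psrlHistory }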
Your attempt to call bash explicitly with your command - bash history -c - is syntactically flawed (see bottom section).
However, even fixing the syntax problem - bash -c 'history -c' - does not clear Bash's history - it seemingly has no effect (and adding the -i option doesn't help) - I don't know why.
The workaround is to remove the file that underlies Bash's (persisted) command history directly:
if (Test-Path $HOME\.bash_history) { Remove-Item -Force $HOME\.bash_history }
To answer the general question implied by the post's title:
To pass a command with arguments to bash for execution, pass it to bash -c, as a single string; e.g.:
bash -c 'date +%s'
Without -c, the first argument would be interpreted as the name or path of a script file.
Note that any additional arguments following the command string passed to -c become the arguments to that command string; that is, the command string acts as a mini-script that can receive arguments the way scripts usually do, via $1, ...:
# Note: the first argument after the command string, "self", becomes
# $0 in Bash terms, i.e. the name of the script.
PS> bash -c 'echo $0; echo arg count: $#' self one two
self
arg count: 2
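This also gives you a robust way to pass PowerShell variable values to such a mini-script - as positional arguments rather than by embedding them in the command string (a hedged sketch; $name is an illustrative variable):
$name = 'world'
# '-' serves as a dummy $0; $name arrives in Bash as $1.
bash -c 'echo hello, $1' - $name   # -> hello, world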

Difference between '-- /bin/sh -c ls' vs 'ls' when setting a command in kubectl?

I am a bit confused with commands in kubectl. I am not sure when I can use the commands directly, like
command: ["command"] or -- some_command
vs
command: [/bin/sh, -c, "command"] or -- /bin/sh -c some_command
Thankfully the distinction is easy(?): every command: is fed into the exec system call (or its Go equivalent), so if your container contains a binary that the kernel can successfully execute, you are welcome to use it in command:. If it is a shell built-in, a shell alias, or otherwise requires sh (or python or whatever) to execute, then you must be explicit to the container runtime about that distinction.
If it helps any, the command: syntax of a Kubernetes container: is the equivalent of the ENTRYPOINT ["",""] line of a Dockerfile, not CMD ["", ""], and certainly not the shell form ENTRYPOINT echo ..., which is fed to /bin/sh for you.
At a low level, every (Unix/Linux) command is invoked as a series of "words". If you type a command into your shell, the shell does some preprocessing and then creates the "words" and runs the command. In Kubernetes command: (and args:) there isn't a shell involved, unless you explicitly supply one.
I would default to using the list form unless you specifically need shell features.
command: # overrides Docker ENTRYPOINT
- the_command
- --an-argument
- --another
- value
If you use list form, you must explicitly list out each word. You may use either YAML block list syntax as above or flow list syntax [command, arg1, arg2]. If there are embedded spaces in a single item [command, --option value] then those spaces are included in a single command-line option as if you quoted it, which frequently confuses programs.
You can explicitly invoke a shell if you need to:
command:
- sh
- -c
- the_command --an-argument --another value
This command consists of exactly three words: sh, the option -c, and the shell command string. The shell will process this command in the usual way and execute it.
You need the shell form only if you're doing something more complicated than running a simple command with fixed arguments. Running multiple sequential commands c1 && c2 or environment variable expansion c1 "$OPTION" are probably the most common ones, but any standard Bourne shell syntax would be acceptable here (redirects, pipelines, ...).
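For example, a hedged sketch combining those shell features (the_command, another_command and $OPTION are placeholders, as above):
command:
- sh
- -c
- the_command "$OPTION" && another_command > /tmp/out.log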

How can I solve this (I suppose) MS PowerShell parsing error?

When I used the command below [1] to set my configuration variable MONGODB_URI, it gives an error [2].
I am using Windows PowerShell.
[1] >> heroku config:set MONGODB_URI='mongodb+srv://myprojectname:<mypassword>#cluster0.rkitj.mongodb.net/<myusername>?retryWrites=true&w=majority'
[2] The system cannot find the file specified.
'w' is not recognized as an internal or external command,
operable program or batch file.
Note: myprojectname, mypassword and myusername are placeholders for the actual value.
It looks like the heroku CLI entry point is a batch file, as implied by the wording of the error messages, which are cmd.exe's, not PowerShell's.
PowerShell doesn't take the special parsing needs of batch files (cmd.exe) into account when it synthesizes the actual command line to use behind the scenes, which involves re-quoting, using double quotes only, and only when PowerShell thinks quoting is needed.
In this case PowerShell does not double-quote (because the value contains no spaces), which breaks the batch-file invocation: cmd.exe then sees the & in the unquoted URI as its command-separator metacharacter and tries to run w=majority as a separate command - hence the 'w' is not recognized error.
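You can reproduce the problem in isolation with a throwaway batch file (test.cmd is a hypothetical name; @echo %* simply echoes the arguments as the batch file receives them):
# Create a batch file that echoes its raw argument line.
Set-Content test.cmd '@echo %*'
.\test.cmd 'a&b'    # breaks: cmd.exe receives a&b unquoted and tries to run 'b'
.\test.cmd '"a&b"'  # works: the embedded double quotes reach cmd.exe intact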
You have the following options:
You can use embedded quoting so as to ensure that the value part of your MONGODB_URI=... key-value pair is passed in double quotes; note the '"..."' quoting:
heroku config:set MONGODB_URI='"mongodb+srv://myprojectname:<mypassword>#cluster0.rkitj.mongodb.net/<myusername>?retryWrites=true&w=majority"'
Caveat: This shouldn't work, and currently only works because PowerShell's passing of arguments to external programs is fundamentally broken as of PowerShell 7.1 - see this answer. Should this ever get fixed, the above will break.
If your command line doesn't involve any PowerShell variables and expressions, you can use --%, the stop-parsing symbol, which, however, in general, has many limitations (see this answer); essentially, everything after --% is copied verbatim to the target command line, except for expanding cmd.exe-style environment-variable references (e.g., %USERNAME%):
heroku config:set --% MONGODB_URI="mongodb+srv://myprojectname:<mypassword>#cluster0.rkitj.mongodb.net/<myusername>?retryWrites=true&w=majority"
If you're willing to install a module, you can use the ie function from the PSv3+ Native module (install with Install-Module Native from the PowerShell Gallery in PSv5+), which internally compensates for all of PowerShell's argument-passing and cmd.exe's argument-parsing quirks (it is implemented in a forward-compatible manner so that should PowerShell itself ever get fixed, the function will simply defer to PowerShell); that way, you can simply focus on meeting PowerShell's syntax requirements, and let ie handle the rest:
# 'ie' prepended to an invocation that uses only PowerShell syntax
ie heroku config:set MONGODB_URI='mongodb+srv://myprojectname:<mypassword>#cluster0.rkitj.mongodb.net/<myusername>?retryWrites=true&w=majority'

Launch external program with multiple commands via PowerShell

I am currently attempting to launch a different console (.exe) and pass multiple commands; while starting and entering a command works just fine, I haven't been able to find out how multiple ones can be entered via powershell.
& "C:\Program Files\Docker Toolbox\start.sh" docker-compose up -d --build
The given command works fine, but as mentioned I need to pass more than one command - I tried using arrays, ScriptBlocks and different sequences, though to no avail.
Edit:
Noticed that the docker build has a -f flag which allows me to specify a file; however, the issue now seems to be that the executed cmd removes all backslashes & special characters, rendering the given path useless.
Example:
&"C:\Program Files\Docker Toolbox\start.sh" 'docker-compose build -f
path\to\dockerfile'
will result in an error stating that "pathtodockerfile" is an invalid path.
Your start.sh needs to be able to handle multiple arguments. This doesn't look like a PowerShell question.
Turns out that it was easier than expected; solved it by executing a separate file that contained the two commands needed and passing it to the start.sh file.
&"C:\Program Files\Docker Toolbox\start.sh" './xyz/fileContainingCommands.sh'

How can I find out what script, program, or shell executed my Perl script?

How would I determine what script, program, or shell executed my Perl script?
Example: I might want to have human readable output if executed from shell (customized for each type of shell), a different type of output if called as a script from another perl script, and a machine readable format if executed from a program such as a continuous integration server.
Motivation: I have a tool that changes its output based on which shell executes it. I'd normally implement this behavior as an option to the script, but this tool's design doesn't allow for options. Other shells have environment variables that indicate what shell is running. I'm working on a patch to support PowerShell, which has no such special variable.
Edit: Many of these answers happen to be Linux-specific. Unfortunately, PowerShell is for Windows. getppid, the $ENV{SHELL} variable, and shelling out to ps won't help in this case. This script needs to run cross-platform.
You can use getppid(). Take this snippet in child.pl:
my $ppid = getppid();
system("ps --no-headers $ppid");
If you run it from the command line, system will show bash or similar (among other things). Execute it with system("perl child.pl"); in another script, e.g. parent.pl, and you will see that perl parent.pl executed it.
To capture just the name of the process with arguments (thanks to ikegami for the correct ps syntax):
my $ppid = getppid();
my $ps = `ps --no-headers -o cmd $ppid`;
chomp $ps;
EDIT: An alternative to this approach might be to create soft links to your script, make the different contexts use different links to access your script, and inspect $0 to build logic around that.
I would suggest a different approach to accomplish your goal. Instead of guessing at the context, make it more explicit. Each use case is wholly separate, so have three different interfaces.
A function which can be called inside a Perl program. This would likely return a Perl data structure. This is far easier, faster and more reliable than parsing script output. It would also serve as the basis for the scripts.
A script which outputs for the current shell. It can look at $ENV{SHELL} to discover what shell is running. For bonus points, provide a switch to explicitly override.
A script which can be called inside a non-Perl program, such as your continuous integration server, and issue machine readable output. XML and/or JSON or whatever.
2 and 3 would be just thin wrappers to format the data coming out of 1.
Each is tailored to fit its specific need. Each will work without heuristics. Each will be far simpler than trying to guess the context and what the user wants.
If you can't separate 2 and 3, have the continuous integration server set an environment variable and look for it.
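A hedged sketch of this layered design (MyTool, report, and the JSON wrapper are illustrative names, not from the original answer):
# Interface 1: a function returning a Perl data structure.
package MyTool;
use strict;
use warnings;

sub report {
    return { status => 'ok', items => [ 'alpha', 'beta' ] };
}

# Interface 3, as a thin wrapper: machine-readable output for a CI server.
package main;
use JSON::PP;    # in core since Perl 5.14
print JSON::PP->new->encode( MyTool::report() ), "\n";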
Depending on your environment, you may be able to pick it up from the environment variables. Consider the following code:
/usr/bin/perl -MData::Dumper -e 'print Dumper(\%ENV);' | grep sh
On my Ubuntu system, it gets me:
'SHELL' => '/bin/bash',
So I guess that says I'm running perl from a bash shell. If you use something else, the SHELL variable may give you a hint.
But let's say you know you're in bash, but perl is run from a subshell. Then try:
/bin/sh -c "/usr/bin/perl -MData::Dumper -e 'print Dumper(\%ENV);'" | grep sh
You will find:
'_' => '/bin/sh',
'SHELL' => '/bin/bash',
So the shell is still bash, but bash has a variable $_ which also shows the absolute filename of the shell or script being executed, which may also give a valuable hint. Similarly, for other environments there will most probably be clues left in the perl %ENV hash that should give you valuable hints.
If you're running PowerShell 2.0 or above (most likely), you can infer whether PowerShell is the parent process by examining the environment variable %psmodulepath%. By default, it points to the system modules under %windir%\system32\windowspowershell\v1.0\modules; this is what you would see if you examine the variable from cmd.exe.
However, when PowerShell starts up, it prepends the user's default module search path to this environment variable, which looks like %userprofile%\documents\windowspowershell\modules. This is inherited by child processes. So, your logic would be to test whether %psmodulepath% starts with %userprofile% to detect PowerShell 2.0 or higher. This won't work in PowerShell 1.0 because it does not support modules.
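A hedged Perl sketch of that heuristic (Windows-only; it assumes the PSModulePath and USERPROFILE environment variables are present, and looks them up case-insensitively to be safe):
use strict;
use warnings;

# Env-var names on Windows vary in case, so match them case-insensitively.
my ($mods) = map { $ENV{$_} } grep { /^psmodulepath$/i } keys %ENV;
my ($home) = map { $ENV{$_} } grep { /^userprofile$/i  } keys %ENV;

if ( defined $mods && defined $home && index( lc $mods, lc $home ) == 0 ) {
    print "parent shell is likely PowerShell 2.0+\n";
}
else {
    print "parent shell is probably not PowerShell\n";
}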
This is on Windows XP with PowerShell v2.0, so take it with a grain of salt.
In a cmd.exe shell, I get:
PSModulePath=C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\
whereas in the PowerShell console window, I get:
PSModulePath=E:\Home\user\WindowsPowerShell\Modules;C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\
where E:\Home\user is where my "My Documents" folder is. So, one heuristic may be to check if PSModulePath contains a user dependent path.
In addition, in a console window, I get:
!::=::\
in the environment. From the PowerShell ISE, I get:
!::=::\
!C:=C:\Documents and Settings\user