Configure ZSH to always fall back to path completion - zsh-completion

I'm using zsh, and the completions mostly just work. But with various commands where arbitrary trailing arguments follow a command that has specialized zsh completion handling, file path completion doesn't work.
# completes to 'npm run build' for a package.json with a 'build' script
npm run b<tab>
# does not complete the file path (to, say, `datafile/somefile.json`)
npm run build -- datafile/som<tab>
Obviously, npm has a configured completion definition that loads the package.json and understands valid args. But after -- the remaining args are just passed through, and I'd like file completion to be the default.
Is there a way to have zsh always fall back to path completion, or maybe have the -- indicator, commonly used to mark the end of arguments, turn default completion back on?
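One hedged possibility (a sketch only, not verified against this exact npm completion): append _files as a last-resort completer in ~/.zshrc, so that zsh falls back to ordinary filename completion whenever the command-specific completion offers no matches:
# Sketch: keep the usual completers, but fall back to plain filename
# completion when nothing else matches (e.g. after 'npm run build --').
zstyle ':completion:*' completer _complete _ignored _files
Note that this keys off "no matches", not off the -- separator itself.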

Related

Rundeck: see what is actually executed on the command line

I'm just getting started with rundeck and trying to find out how it works.
I created a simple Job that should install some packages on the remote node from a pre-selected list (Option).
When I select more than one option, the command fails. I want to find out why it fails, but (even with debug mode enabled) I can see nowhere which command is actually being executed on the remote node.
My command looks like yum install -y "${option.package}" and the unexpected response is e.g.: no package [selected options] available ... I have selected (space) as the delimiter.
How can I see what is executed on the remote host?
Update:
I have meanwhile found out why my options did not work as expected; I had to use the unquoted variant on the command line. But the main question still stays the same ...
Right now the only way to see the exact executed command is to run the job in debug mode. Just select "Run with Debug output" and you can see the command dispatched in the middle of the execution output.

Pass values from cmd to npm script

I've searched for days but did not find an answer that worked for my problem.
I want to run an npm script through cmd or PowerShell on Windows and pass values for script variables.
I would like the below script in package.json:
"scripts": {
"happy-birthday": "echo Happy birthday $NAME and many returns!"
}
To output:
Happy birthday Danny and many returns!
With a command like:
npm run happy-birthday --NAME=Danny
Everything I tested so far gives me:
Happy birthday $NAME and many returns!
It feels like npm does not recognize this as a variable and prints it as a literal string. I also tested %NAME%.
npm version: 6.12.1
You can't pass arguments to the middle of npm scripts; arguments can only be appended to the end of them. See my answer here for further explanation.
Given your example, consider the following solution which will work successfully across all platforms:
In your package.json file define your happy-birthday npm script as follows:
"scripts": {
"happy-birthday": "node -e \"console.log('Happy birthday %s and many returns!', process.argv[1] || 'Jane')\""
}
Then run the following command via cmd or PowerShell (or any other command-line tool).
npm run happy-birthday -- Danny
This will print:
Happy birthday Danny and many returns!
Note: If you just run the following command, i.e. without passing an argument:
npm run happy-birthday
It will print the default name instead:
Happy birthday Jane and many returns!
Explanation:
The npm script utilizes the Node.js command-line option -e to evaluate the inline JavaScript as follows:
console.log('Happy birthday %s and many returns!', process.argv[1] || 'Jane')
The arguments passed via the CLI, e.g. Danny, are read using process.argv, whereby we reference the array element at index 1.
The logical OR operator (||) is utilized to return Jane when no argument is passed.
Edit: Setting environment variables instead
Alternatively, you may want to consider setting an environment variable and referencing it in your npm script.
In your package.json define your happy-birthday npm script as follows:
"happy-birthday": "echo Happy birthday %NAME% and many returns!"
Note the Windows-only %NAME% notation used to reference the variable.
Using cmd
When using cmd (i.e. Command Prompt) you'll need to run the following command:
set NAME=Danny&& npm run happy-birthday
Using PowerShell
When using PowerShell you'll need to run the following command instead:
$env:NAME="Danny" ; npm run happy-birthday
Note: The default shell that npm utilizes for npm scripts is sh on *nix and cmd on Windows, so the aforementioned cmd and PowerShell methods will fail on *nix platforms.
If cross-platform support is a requirement and you do want to take this approach of setting environment variables and referencing them via npm scripts, then consider utilizing the cross-env package.
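A minimal sketch of that cross-env approach (the hard-coded name and the inline node call are purely illustrative, and cross-env would need to be installed as a dev dependency):
"scripts": {
  "happy-birthday": "cross-env NAME=Danny node -e \"console.log('Happy birthday %s and many returns!', process.env.NAME)\""
}
Here cross-env sets NAME for the child process on any platform, and reading process.env.NAME sidesteps the $NAME vs %NAME% syntax difference entirely.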

NPM custom arguments produces Powershell error

I am trying to launch an npm script with a custom argument:
"publish-local": "ng build $PROJECT && cd dist/$PROJECT && npm publish --registry=http://my.local.npm.registry"
This is how I am trying to call it from the prompt:
PROJECT=my-lib npm run publish-local
This is how I have seen it should work on different web sources (for example: here).
Anyway, trying to do that, I get this error:
PROJECT=my-lib: The term 'PROJECT=my-lib' is not recognized as the name of a cmdlet, function, script file, or
operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try
again.
At line:1 char:1
What to do?
Short answer: The example(s) that you've seen that "should work" will only work on *nix. They do not work via PowerShell, nor via Command Prompt on Windows.
Given that you're wanting to pass an argument to an npm script, whereby that argument is consumed twice in the middle of that script, I suggest you consider the following approach instead:
The following suggested approach is very similar to my answer here.
Solution - Cross-platform:
For a cross-platform solution (one which works successfully with *nix, Windows Command Prompt, PowerShell, etc.), you'll need to utilize a nodejs helper script.
Let's name the nodejs script publish-local.js and save it in the project's root directory, at the same level as package.json.
publish-local.js
const execSync = require('child_process').execSync;
const arg = process.argv[2] || 'my-lib'; // Default value `my-lib` if no args provided via CLI.
execSync('ng build ' + arg + ' && cd dist/' + arg +
  ' && npm publish --registry=http://my.local.npm.registry', {stdio: [0, 1, 2]});
package.json
Configure your publish-local script to invoke publish-local.js as follows:
...
"scripts": {
  "publish-local": "node publish-local",
},
...
Running publish-local script:
To invoke publish-local via your CLI you'll need to run:
npm run publish-local -- my-lib
Notes:
Inside publish-local.js take note of the line that reads:
const arg = process.argv[2] || 'my-lib'; // Default value `my-lib` if no args provided via CLI.
It specifies a default value to use when no argument is provided via the CLI.
So, if you were to run the npm script without passing an argument:
npm run publish-local
or run it passing an argument:
npm run publish-local -- my-lib
They are essentially the same. However, if you were to provide an argument different from my-lib, i.e. one different from the default specified in publish-local.js, it will take precedence. For example:
npm run publish-local -- some-other-lib
For a further understanding of this solution I suggest you read my answer that I previously linked to.
The default shell used by npm is cmd.exe on Windows, and sh on *nix - the given solution will work successfully with either.
If you only intend to use/support more recent versions of Node.js that support ECMAScript 6 features, such as destructuring and template literals, then you could refactor publish-local.js as follows:
publish-local.js (refactored using ES6 features)
const { execSync: shell } = require('child_process');
const [ , , projectName='my-lib' ] = process.argv;
shell(`ng build ${projectName} && cd dist/${projectName} && npm publish --registry=http://my.local.npm.registry`, {stdio:[0, 1, 2]});

How to pass arguments to memcheck with ctest?

I want to use ctest from the command line to run my tests with memcheck and pass in arguments for the memcheck command.
I can run ctest -R my_test to run my test, and I can even run ctest -R my_test -T memcheck to run it through memcheck.
But I can't seem to find a way to pass arguments to that memcheck command, like --leak-check=full or --suppressions=/path/to/file.
After reading ctest's documentation I've tried using the -D option with CTEST_MEMCHECK_COMMAND_OPTIONS and MEMCHECK_COMMAND_OPTIONS. I also tried setting these as environment variables. None of my attempts produced any different test command. It's always:
Memory check command: /path/to/valgrind "--log-file=/path/to/build/Testing/Temporary/MemoryChecker.7.log" "-q" "--tool=memcheck" "--leak-check=yes" "--show-reachable=yes" "--num-callers=50"
How can I control the memcheck command from the ctest command line?
TL;DR
ctest --overwrite MemoryCheckCommandOptions="--leak-check=full --error-exitcode=100" \
--overwrite MemoryCheckSuppressionFile=/path/to/valgrind.suppressions \
-T memcheck
Explanation
I finally found the right way to override such variables, but unfortunately it's not easy to understand this from the documentation.
So, to help the next poor soul that needs to deal with this, here is my understanding of the various ways to set options for memcheck.
In a CTestConfig.cmake in your top-level source dir, or in a CMakeLists.txt (before calling include(CTest)), you can set MEMORYCHECK_COMMAND_OPTIONS or MEMORYCHECK_SUPPRESSIONS_FILE.
When you include(CTest), CMake will generate a DartConfiguration.tcl in your build directory, and setting the aforementioned variables will populate MemoryCheckCommandOptions and MemoryCheckSuppressionFile, respectively, in this file.
This is the file that ctest parses in your build directory to populate its internal variables for running the memcheck step.
So, if you'd like to set your project's options for memcheck during cmake configuration, this is the way to go.
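As a rough sketch of that configure-time setup (the option values and the suppressions path are only examples):
# In CMakeLists.txt, before include(CTest); these values end up in the
# generated DartConfiguration.tcl.
set(MEMORYCHECK_COMMAND_OPTIONS "--leak-check=full --error-exitcode=100")
set(MEMORYCHECK_SUPPRESSIONS_FILE "${CMAKE_SOURCE_DIR}/valgrind.suppressions")
include(CTest)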
If instead you'd like to modify these options after you already have a properly configured build directory, you can:
Modify the DartConfiguration.tcl directly (the relevant entries look roughly like the sketch after this list), but note that your edits will be overwritten, since this file is regenerated each time cmake runs.
Use the ctest --overwrite command-line option to set these memcheck options just for that run.
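For reference, the corresponding entries in the generated DartConfiguration.tcl look roughly like this (values are illustrative):
MemoryCheckCommand: /path/to/valgrind
MemoryCheckCommandOptions: --leak-check=full --error-exitcode=100
MemoryCheckSuppressionFile: /path/to/valgrind.suppressions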
Notes
I've seen mentions online of a CMAKE_MEMORYCHECK_COMMAND_OPTIONS variable. I have no idea what this variable is and I don't think cmake is aware of it in any way.
Setting CTEST_MEMORYCHECK_COMMAND_OPTIONS (the variable that is actually documented in the cmake docs) in your CTestConfig.cmake or CMakeLists.txt has no effect. It seems this variable only works in "CTest Client Scripts", which I have never used.
Unfortunately, neither MEMORYCHECK_COMMAND_OPTIONS nor MEMORYCHECK_SUPPRESSIONS_FILE is documented explicitly in cmake; they are only mentioned indirectly in the ctest documentation and the Testing With CTest tutorial.
When ctest is run in the build directory, it parses this file to populate its internal variables:
https://cmake.org/cmake/help/latest/manual/ctest.1.html#dashboard-client-via-ctest-command-line

Executing subprocess.Popen inside Python script in PyDev context is different than running in terminal

I'm executing this code:
p = subprocess.Popen(['/path/to/my/script.sh','--flag'] , stdin=subprocess.PIPE)
p.communicate(input='Y')
p.wait()
It works when executing it in the shell using "python scriptName.py",
but when executing it using PyDev in Eclipse, it fails with:
/path/to/my/script.sh: line 111: service: command not found
This bash script "script.sh" contains the following command which causes the error:
service mysqld restart
So "service" is not recognized when running the .sh script from the context of PyDev.
I guess it has to do with some environment variable configuration, but I couldn't find out how to fix it.
BTW - Using "shell=True" when calling subprocess.Popen didn't solve it.
service is usually located in /usr/sbin, and that directory likely isn't on the PATH. As it usually contains administrative binaries and scripts which aren't designed to be run by everyone (only by admins/root), the sbin directories aren't always added to the PATH by default.
To check this, try to print PATH in your script (or add an env command).
To fix it, you could:
set the PATH in your Python script via os.environ
pass an env dict containing the correct PATH to Popen (see the sketch after this list)
set the PATH in your shell script
use the full path to service in your shell script
set the PATH in Eclipse
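A rough sketch of the env-dict option (the sbin paths are the usual locations for service; adjust them to your system):
import os
import subprocess

# Start from the current environment and make sure the sbin directories,
# where service typically lives, are on the PATH.
env = os.environ.copy()
env['PATH'] = env.get('PATH', '') + os.pathsep + '/usr/sbin' + os.pathsep + '/sbin'

p = subprocess.Popen(['/path/to/my/script.sh', '--flag'],
                     stdin=subprocess.PIPE, env=env)
p.communicate(input=b'Y')  # bytes on Python 3; plain 'Y' on Python 2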