Is there a single-argument alternative to the doubled verbose option for pytest runs? It looks a little deceptive in shell script source, seeming like a merge error or a benign redundant typo.
I'm not sure of the evolution of the verbosity inputs, but it's showing up in our repo from tips like the following, which pytest gives upon failures:
...Full output truncated (19 lines hidden), use '-vv' to show
It would be nice if there were something like --verbose2 or something of the sort.
It turns out there is such an option; it's under verbosity:
--verbosity=VERBOSE
Tracing around the codebase seems to imply that 0, 1, and 2 are valid values; there doesn't appear to be any documentation on that point, however.
Looking at the command argument definition, I can also see the pattern they're using: the count action, which sheds new light on repeated arguments. I didn't even realize that was a pattern.
group._addoption(
"-v",
"--verbose",
action="count",
default=0,
dest="verbose",
help="increase verbosity.",
)
So it would seem that
"-vv" === "--verbose --verbose" === "--verbosity=2"
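For the curious, here is a minimal standalone argparse sketch (not pytest's actual code) showing why the first two spellings above are equivalent: the count action simply increments its destination once per occurrence of the flag.
import argparse

parser = argparse.ArgumentParser()
# action="count" bumps `verbose` by one for each -v/--verbose given
parser.add_argument("-v", "--verbose", action="count", default=0, dest="verbose")

print(parser.parse_args(["-vv"]).verbose)                     # 2
print(parser.parse_args(["--verbose", "--verbose"]).verbose)  # 2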
In my previous questions here on Stack Overflow, we determined my command should run like this:
(& C:\Gyb\Gyb.exe --email $DestinationGYB --action restore --local-folder $GYBFolder --label-restored $GYBLabel --service-account)
The problem with this is that if I run that same command in a command prompt, I see a bunch of status information.
When I run the command as above, all I see in VSCode is that it ran the line and it's waiting. How can I make it show me the output like the command prompt does, without opening a new window?
Here is GYB:
https://github.com/jay0lee/got-your-back
Remove the parentheses () around your command if you want to see the output as it is produced. Otherwise, this behavior is expected and is not unique to the VSCode terminal.
The group-expression operator () is used to control the order in which code is executed in PowerShell. Expressions are evaluated like order of operations (re: PEMDAS) in mathematics: the innermost parentheses get evaluated first. You can also use the group-expression operator to invoke a property or method on the value returned by the group.
The problem is that group-expressions don't output to the parent level directly; that only happens when the group-expression is done executing. So when you have something that can run for several minutes or even hours, like gyb.exe, you don't see that output until the command exits and execution continues.
Contrast this with running outside of a group-expression: as STDOUT is written to the success stream, it is immediately written to the console as it comes. There is no additional mechanism proxying your output.
Note: you will experience nearly the same behavior with the sub-expression operator $() as well, although do not conflate sub-expressions and group-expressions, as they serve different purposes. See the official explanation of the Grouping Operator ( ); the Subexpression Operator $( ) is explained immediately below it.
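To see the difference for yourself, here is a quick illustration you can run in any Windows PowerShell console (ping is just a stand-in for a long-running command like gyb.exe):
# Inside a group-expression: nothing appears until the command completes
(ping -n 5 localhost)
# Outside a group-expression: each line appears as it is produced
ping -n 5 localhost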
I am new to Coverity. I am using it from the command prompt with its .exe files. I want to pass specific macros to cov-build.exe so that those macros take effect when cov-emit.exe (called by cov-build.exe) parses the .c files. So far I have tried the configuration below.
cov-build.exe --dir Intermediate_folder --delete-stale-tus --preprocess-first --return-emit-failures "My_bat_file" -- -D My_macro_name=my_macro_body
Any help will be much appreciated; I am stuck on this.
Thanks and regards,
Newbie_in
cov-build wraps your existing build command, monitors it and spawns parallel compiler invocations in order to understand your code. These parallel compiler invocations will see the same command line being passed to your own compiler.
So if you want this define to take effect for your compiler as well as Coverity's, simply add it to your build the way you normally would, and Coverity will see it.
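For example, assuming a make-based build (the intermediate directory name and the CFLAGS usage here are illustrative, not taken from the question):
cov-build.exe --dir Intermediate_folder make CFLAGS=-DMy_macro_name=my_macro_body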
If you want to add a define that only Coverity's compiler can see, this is best done within the config for your compiler.
You can either edit the config directly (add
<append_arg>-Dmy_macro_name=my_macro_body</append_arg>
after the <begin_command_line_config> line), or re-configure using --xml-option.
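For illustration, the directly edited config would then contain something like this (the surrounding structure is sketched from the element names above, not copied from a real config file):
<begin_command_line_config>
<append_arg>-Dmy_macro_name=my_macro_body</append_arg>
...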
For example, if you're using the shortcut gcc config, it would look like this:
$ cov-configure --gcc --xml-option=append_arg>-Dmy_macro_name=my_macro_body
I noticed you're using --preprocess-first on the cov-build command line. I recommend against this: it destroys XREFs, making it much more difficult to browse defect information, and it makes the analysis unable to find some defects (e.g. ones that are due to macros). --preprocess-next behaves like --preprocess-first but only fires if the initial compilation attempt fails, so if you're using --preprocess-first to work around compilation issues, I strongly recommend using --preprocess-next instead.
If you do have compilation issues, it's always good to report them (along with a reproducer) to Coverity support so that they can be fixed in future releases.
I am trying to figure out the expression syntax for py.test selection using the '-k' option.
I have seen the examples, but I am unclear on what the syntax options are when using the -k flag.
I have tried scanning the py.test source code, but so far no luck.
Can anyone give me pointers on what the syntax is for py.test test selection (-k)?
Mmm... it's not well documented, mainly because it's a bit confused and not that well defined. You can use 'and', 'or' and 'not' to match strings within a test's name and/or its markers. At heart, it's an eval.
For the moment (until the syntax is hopefully improved) my advice is to:
Use --collectonly to confirm that your -k selects what you want before executing tests
Add markers to tests as needed to further distinguish them.
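For illustration, with made-up test names, selection expressions look like this:
# run only tests whose names contain "http" but not "ssl"
py.test -k "http and not ssl"
# list what would be selected without executing anything
py.test --collectonly -k "send or recv"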
When using IPython, it's often convenient to see how long your commands take to run by using the %time magic function. When you use this often enough, you start to wish that you could just toggle a setting to get this metadata by default whenever you enter a query. Psql lets you do this with \timing. GHCi lets you do this with :set +s. Does IPython let you do this? And if not, why not?
The "ipythonic" way of timing code is using the %timeit or %%timeit magic functions (for single-line and multi-line code, respectively).
These functions provide quite accurate results by running the code multiple times (the exact number is adaptive if not specified).
The global flag you are asking about does not exist in IPython. Furthermore, just adding %%timeit to all the cells will not work, because global variables are not modified when calling %%timeit. This "feature" is directly inherited from the timeit module.
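A short session sketch (the timed statements are arbitrary examples) shows both magics and the namespace caveat:
In [1]: %timeit sum(range(1000))
In [2]: %%timeit
   ...: total = 0
   ...: for i in range(1000):
   ...:     total += i
In [3]: total   # NameError: assignments made under %%timeit don't leak out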
For more info, see this IPython issue.
What patterns contribute or detract from the usability of a CLI interface?
As an example, consider the CLI for ClearCase. The CLI is very comprehensive (+1), but it has several glaring opportunities for improvement. Recently, I wanted to force files to lower case while importing into ClearCase using clearfsimport. Unfortunately, I wound up on the documentation for its cousin, clearimport. It may seem slight, but it cost me more hours than I care to admit. The variation in the middle got me.
Why provide such nearly identical functionality with such nearly identical names? There are many better options, in my opinion:
clearimport -fs
fsclearimport
clear_fs_import
clearimport_fs
Anything would be better than what they went with. The code I am working on IS a CLI and this experience made me look at my own choices. I think I have all the basics covered (standard help, long-form vs short-form, short meaningful names, providing examples, eliminate ambiguity, accurately handling spaces within quotes, etc).
There is some literature on this subject.
Perhaps a bad CLI is no different than a bad API; a CLI is a type of API in some sense. The goals are naturally common: flexibility, readability, and completeness. Several factors differentiate a CLI from a typical API, though. One is that a CLI needs to support scriptability (participating, perhaps many times, in a series of pipes). Another is that autocompletion and namespaces don't exist in the same way; you don't always have a nice colorful GUI doing the work for you. CLIs must document themselves directly to the user. And finally, the audience of a CLI is vastly different from that of a standard API. I appreciate any insight you may have.
I like the subcommand pattern, which I'm most familiar with as it's implemented in the command-line Subversion client.
svn [subcommand] [options] [files]
Without the subcommands, subversion would have waaaaay too many different options for me to remember them effectively, and the help system would be a pain to slog through.
But, if I don't remember how any particular subcommand works, I can just type:
svn help [subcommand]
...and it shows me only the relevant portions of the help documentation.
As noted above, this format:
[master verb] [subverb] [optionally, noun] [options]
is good in terms of remembering what commands are available. cvs, svn, Perforce, and git all adhere to this. It improves discoverability of commands, a major CLI problem. One wrinkle that occurs here is options for the master verb vs. options for the subverb. For example,
cvs -d dir command bar
is different than
cvs command -d dir bar
This was a confusing situation in cvs, which svn "fixed" by allowing options specified in any order. Your own solution may vary; if you have a very good reason to pass options to the master verb, okay, just be aware of the overhead.
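For what it's worth, this shape is easy to express with Python's argparse subparsers; the sketch below (the program, subcommand, and option names are all invented) also shows how master-verb options and subverb options land in distinct places:
import argparse

parser = argparse.ArgumentParser(prog="vcs")
# master-verb option, given before the subcommand: vcs -d DIR commit ...
parser.add_argument("-d", "--repo-dir", default=".")

subparsers = parser.add_subparsers(dest="subcommand", required=True)

# "commit" subverb with its own options: vcs commit -m MSG file...
commit = subparsers.add_parser("commit", help="record changes")
commit.add_argument("-m", "--message", required=True)
commit.add_argument("files", nargs="*")

args = parser.parse_args(["-d", "/tmp/repo", "commit", "-m", "fix", "a.c"])
print(args.repo_dir, args.subcommand, args.message, args.files)
A side benefit: argparse generates per-subcommand help (vcs commit -h), which mirrors the svn help [subcommand] behavior described above.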
Looking to API usability is a good idea too, but beware that there is no real typing in CLI commands, and there is a lot of richness in what CLI commands 'return', since you've got both a return code and an output to work with. In the unixy/streams world, the output is usually much more important than the return code. Getting the format of your output right is crucial. Also, while tempting, I've found that sending different things to stdout vs. stderr is not always useful; it confuses novice and even intermediate users (because both get dumped to the console in most cases), and is rarely useful to advanced users. So unless there's a real need for it, I avoid it; it's too easy for (e.g.) someone to get very confused about why the output of a command was '' in an error condition, just because the programmer nicely dumped the errors to stderr.
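A minimal sketch of the two 'return channels' (everything here is invented for illustration): the exit code is what shell logic branches on, while stdout is what pipelines consume:
import sys

def main() -> int:
    record = "widget\t42"
    print(record)   # the output: what `cmd | sort | cut ...` operates on
    return 0        # the return code: what `if cmd; then ...` branches on

if __name__ == "__main__":
    sys.exit(main())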
Another issue in design is the "what next" problem. In a GUI, the next steps for the user are spelled out by the available buttons, menus, etc. In a CLI, the user can literally type any command next, and pipe any command to any other. (Or try, at least.) I design my commands to give hints (either in the help or the output) as to what potential next steps might be in a typical workflow.
Another good pattern is allowing user customization of the output. While it is possible for users to use cut, sort, etc. to tailor the output, being able to specify a format string magnifies the utility of a command. The example I cite here is top, which lets you tell it which columns you want.
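A cheap way to get this effect (a sketch; the --format option and the field names are invented, not taken from top) is to expose your record fields through str.format:
import argparse

parser = argparse.ArgumentParser()
# let users pick their own columns, e.g. --format "{name}: {cpu}%"
parser.add_argument("--format", default="{name}\t{pid}\t{cpu}")
args = parser.parse_args()

processes = [{"name": "init", "pid": 1, "cpu": 0.0}]  # stand-in data
for proc in processes:
    print(args.format.format(**proc))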