I want to use ctest from the command line to run my tests with memcheck and pass in arguments for the memcheck command.
I can run ctest -R my_test to run my test, and I can even run ctest -R my_test -T memcheck to run it through memcheck.
But I can't seem to find a way to pass arguments to that memcheck command, like --leak-check=full or --suppressions=/path/to/file.
After reading ctest's documentation, I've tried using the -D option with CTEST_MEMCHECK_COMMAND_OPTIONS and MEMCHECK_COMMAND_OPTIONS. I also tried setting these as environment variables. None of my attempts produced a different test command. It's always:
Memory check command: /path/to/valgrind "--log-file=/path/to/build/Testing/Temporary/MemoryChecker.7.log" "-q" "--tool=memcheck" "--leak-check=yes" "--show-reachable=yes" "--num-callers=50"
How can I control the memcheck command from the ctest command line?
TL;DR
ctest --overwrite MemoryCheckCommandOptions="--leak-check=full --error-exitcode=100" \
--overwrite MemoryCheckSuppressionFile=/path/to/valgrind.suppressions \
-T memcheck
Explanation
I finally found the right way to override such variables, but unfortunately it's not easy to understand this from the documentation.
So, to help the next poor soul that needs to deal with this, here is my understanding of the various ways to set options for memcheck.
In a CTestConfig.cmake in your top-level source dir, or in a CMakeLists.txt (before calling include(CTest)), you can set MEMORYCHECK_COMMAND_OPTIONS or MEMORYCHECK_SUPPRESSIONS_FILE.
When you include(CTest), CMake will generate a DartConfiguration.tcl in your build directory and setting the aforementioned variables will populate MemoryCheckCommandOptions and MemoryCheckSuppressionFile respectively in this file.
This is the file that ctest parses in your build directory to populate its internal variables for running the memcheck step.
So, if you'd like to set your project's options for memcheck during cmake configuration, this is the way to go.
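For example, a minimal sketch of that configure-time approach (the option values below are placeholders, not anything mandated by CMake):
# In the top-level CMakeLists.txt, before include(CTest)
set(MEMORYCHECK_COMMAND_OPTIONS "--leak-check=full --error-exitcode=100")
set(MEMORYCHECK_SUPPRESSIONS_FILE "${CMAKE_SOURCE_DIR}/valgrind.suppressions")
include(CTest)  # generates DartConfiguration.tcl containing the values above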
If instead you'd like to modify these options after you already have a properly configured build directory, you can (see the sketch after this list):
Modify DartConfiguration.tcl directly, but note that this file is regenerated, and your edits lost, every time cmake runs.
Use the ctest --overwrite command-line option to set these memcheck options just for that run.
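A rough sketch of the first option, assuming a build directory named build (the option values are again placeholders; the second option is exactly the TL;DR command at the top):
# DartConfiguration.tcl stores one "Key: value" pair per line
grep MemoryCheck build/DartConfiguration.tcl
sed -i 's|^MemoryCheckCommandOptions:.*|MemoryCheckCommandOptions: --leak-check=full --error-exitcode=100|' build/DartConfiguration.tcl
# remember: the next cmake run regenerates this file and discards the edit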
Notes
I've seen mentions online of a CMAKE_MEMORYCHECK_COMMAND_OPTIONS variable. I have no idea what this variable is and I don't think cmake is aware of it in any way.
Setting CTEST_MEMORYCHECK_COMMAND_OPTIONS (the variable that is actually documented in the cmake docs) in your CTestConfig.cmake or CMakeLists.txt has no effect. It seems this variable only works in "CTest Client Scripts", which I have never used.
Unfortunately, neither MEMORYCHECK_COMMAND_OPTIONS nor MEMORYCHECK_SUPPRESSIONS_FILE is documented explicitly in cmake; they only show up indirectly in the ctest documentation and the Testing With CTest tutorial.
When ctest runs in the build directory, it parses this file to populate its internal variables: https://cmake.org/cmake/help/latest/manual/ctest.1.html#dashboard-client-via-ctest-command-line
It's not clear to me how this interacts with the CTest client-script variables mentioned above.
Every time we use core.exportVariable (which, as far as I know, is the canonical way to export a variable in @actions/core and, consequently, in github-script), you get a deprecation warning such as this one:
Warning: The set-env command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
That link leads to an explanation of environment files, which, well, are files. The problem is that files don't seem to have great support in github-script. There's the @actions/io package, but there's no way to create a file with it.
So is there something I'm missing, or is there effectively no way to create an environment file from inside a github-script step?
You no longer need actions/github-script or any other special API to export an environment variable. According to the Environment Files documentation, you can simply write to the $GITHUB_ENV file directly from a workflow step, like this:
steps:
- name: Set environment variable
run: echo "{name}={value}" >> $GITHUB_ENV
The step will expose the given environment variable to subsequent steps in the currently executing workflow job.
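For instance, a small sketch (the variable name and value are made up) showing a later step in the same job reading the value:
steps:
  - name: Set environment variable
    run: echo "MY_VAR=hello" >> "$GITHUB_ENV"
  - name: Use it in a later step
    run: echo "MY_VAR is now $MY_VAR"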
Just when I think I know how the shell works fairly well, something comes along and stumps me. The following commands were executed on GNU bash, version 3.2.25.
I have several ./configure scripts that all share a group of common configure options, one of them being CFLAGS.
To that end, I have two variables
CFLAGS="-fPIC -O3"
COMMON_CONFIGURE_OPTIONS="CFLAGS=\"$CFLAGS\" --enable-static --disable-shared --prefix=$PREFIX"
When this gets passed to ./configure, it is done like so:
"$FOO/configure" $COMMON_CONFIGURE_OPTIONS
For the life of me, I cannot seem to get this to expand correctly. I have tried manually substituting the value of $CFLAGS into $COMMON_CONFIGURE_OPTIONS. I have tried every combination of single and double quotes under the sun. I have even tried quoting the entire "CFLAGS=..." argument.
The version I gave above yields the following (when set -x is enabled)
../configure 'CFLAGS="-fPIC' '-O3"' --enable-static --disable-shared --prefix=../install
configure: error: unrecognized option: `-O3"'
Try `../configure --help' for more information
What I expected, and what I desire, is for configure to be invoked like
./configure CFLAGS="-fPIC -O3" --enable-static --disable-shared --prefix="$PREFIX"
How can I achieve what I want, and additionally, are there good resources/tips on how to avoid this problem in the future?
To achieve what you want, I think you need to fundamentally change your approach. Assuming your configure scripts are generated by autoconf, I would suggest using a config.site file. That is, simply do something like:
mkdir -p $PREFIX/share
echo 'CFLAGS="-fPIC -O3"' > $PREFIX/share/config.site
And then invoke configure as:
/path/to/configure --enable-static --disable-shared --prefix=$PREFIX
Make sure that CONFIG_SITE is not set in the environment when you invoke configure, else the defaults will come from the file named there.
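If you'd rather keep everything in the invoking script, a different technique (a bash array, not part of the config.site approach above) also avoids the word-splitting problem, because each array element stays a single argument:
CFLAGS="-fPIC -O3"
COMMON_CONFIGURE_OPTIONS=("CFLAGS=$CFLAGS" --enable-static --disable-shared --prefix="$PREFIX")
"$FOO/configure" "${COMMON_CONFIGURE_OPTIONS[@]}"   # expands to one word per element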
I am not understanding how to properly run a simple test (a feature file and a python file)
with the library pytest-bdd.
From the official documentation, I can't understand what command to issue to run a test.
I tried using the pytest command, but it reported that no tests ran.
Do I need to use another library, such as behave, to run a feature file?
After trying for two days, I figured out that
running a pytest-bdd test has certain requirements, at least in my view:
put both the feature file and the python file in the same directory (maybe this can be changed with configuration files)
the python file name needs to start with test_
the python file needs to contain a method whose name starts with test_
the method starting with test_ needs to be bound to the scenario via the @scenario decorator
to run the test, issue the pytest command in the same directory (maybe this is also configurable)
After running it you will only see that the method starting with test_ passed, but all the steps actually ran. To check, you can assert False in any @when or @then annotated method and it will throw errors.
The system contained: pytest-bdd==3.0.2 (copied from pip freeze output)
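To make that concrete, here is a minimal, made-up sketch of such a layout (file names, step texts and the fixture are invented for illustration):
# my_feature.feature (same directory as the test file):
#   Feature: Arithmetic
#     Scenario: Adding numbers
#       Given the numbers 2 and 3
#       When I add them
#       Then the result is 5
# test_my_feature.py:
import pytest
from pytest_bdd import scenario, given, when, then

@scenario("my_feature.feature", "Adding numbers")
def test_adding_numbers():      # the collected test; its name starts with test_
    pass

@pytest.fixture
def calc():                     # shared state passed between the steps
    return {}

@given("the numbers 2 and 3")
def the_numbers(calc):
    calc["a"], calc["b"] = 2, 3

@when("I add them")
def add_them(calc):
    calc["result"] = calc["a"] + calc["b"]

@then("the result is 5")
def check_result(calc):
    assert calc["result"] == 5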
Feature files and python files can be placed in different folders using the bdd_features_base_dir setting provided by pytest-bdd; I think it is better to keep feature files in a separate folder too.
Here you can see a working example (a simple hello world BDD test):
https://github.com/davidemoro/pytest-play-docker/tree/master/tests
https://github.com/davidemoro/pytest-play-docker/blob/master/tests/pytest.ini (see bdd_features_base_dir in [pytest] section)
https://github.com/davidemoro/pytest-play-docker/tree/master/tests/bdd
If you want to try out pytest-bdd without installing it, you can use Docker. Create a folder containing your pytest BDD files (and, if you want, a separate features folder targeted by bdd_features_base_dir) and run:
docker run --rm -it -v $(pwd):/src davidemoro/pytest-play:latest
I've found out that in the python file you don't have to do this:
the method starting with test_ needs to be bound to the scenario via the @scenario decorator
You can just add scenarios("") so that the tests which use the steps defined in this specific python file get generated and run.
Remember to import scenarios: from pytest_bdd import scenarios
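A small sketch of that shortcut, assuming a hypothetical features/ directory next to the test file:
from pytest_bdd import scenarios, given, when, then
# Generates a test_* function for every scenario found under "features";
# the @given/@when/@then step definitions live in this module or in conftest.py.
scenarios("features")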
Command:
pytest -v path_to_test_file.py
Things to note here:
Check the format of the feature file: filename.feature
Always add __init__.py to the test modules, otherwise the test runner will not find the test files
Glue the right step definitions to the test function
Add the feature file in the features module
If you are using python3, execute the test with python3:
python3 -m pytest -v path_to_test_file.py
Documentation
https://pytest-bdd.readthedocs.io/en/stable/#
How do I specify the name of the executable when using the command-line version of InstallShield? I'm looking for the command-line switch.
I need to create a package based on the version I pass.
For example, if I pass 2.2.0:
SET RELEASE_VERSION="2.2.0"
ISCmdBld.exe -p "\Path\BuildProject.ism" -y %RELEASE_VERSION% -? MY_COOL_APP_%RELEASE_VERSION%.EXE
I need to know the switch (indicated as ? here) which will create MY_COOL_APP_2.2.0.exe after building and running the command line InstallShield build tool.
I tried overriding the path variable values at build time:
ISCmdBld.exe -p "\Path\BuildProject.ism" -y %RELEASE_VERSION% -l MYPathVar="MY_COOL_APP_%RELEASE_VERSION%"
I have associated the path variable with the single .exe file name in the Project --> Settings --> Application tab, but the build still gives me the default setup.exe.
Any inputs are much appreciated.
There is no parameter for IsCmdBld.exe that directly changes the name of the resulting setup.exe file. For a couple of predetermined names you could create multiple release configurations and select them (with -r, or product configurations via -a), but for your case that is unlikely to scale. Instead you should consider one of the following:
Use automation (perhaps invoke a .vbs script) to edit the release configuration, and then build the project
Build to a known name, and then rename the resulting file as the next step in your build script (see the sketch after this list)
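A rough sketch of the second option as a batch step; the output folder below is a placeholder, use whatever location your release configuration actually writes setup.exe to:
SET RELEASE_VERSION=2.2.0
ISCmdBld.exe -p "\Path\BuildProject.ism" -y %RELEASE_VERSION%
REM rename the freshly built installer (destination is a file name only)
ren "\Path\To\Release\Output\setup.exe" "MY_COOL_APP_%RELEASE_VERSION%.exe"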
I would like to put configuration (in this case, site name) into supervisor
environment variables, for expansion in program:x command arguments. Is this supported? The documentation's wording would seem to indicate yes.
The following syntax is not working for me on supervisor-3.0 (excerpt of config file):
[supervisord]
environment = SITE="mysite"
[program:service_name]
command=/path/to/myprog/myservice /data/myprog/%(ENV_SITE)s/%(ENV_SITE)s.db %(program_name)s_%(process_num)03d
process_name=%(program_name)s_%(process_num)03d
numprocs=5
numprocs_start=1
Raises the following error:
sudo supervisord -c supervisord.conf
Error: Format string
'/path/to/myprog/myservice /data/myprog/%(ENV_SITE)s/%(ENV_SITE)s.db %(program_name)s_%(process_num)03d'
for 'command' contains names which cannot be expanded
Reading the documentation, I expected environment variables to be available for
expansion in program:x command as %(ENV_VAR)s:
http://supervisord.org/configuration.html#program-x-section-values
command:
"String expressions are evaluated against a dictionary containing the keys
group_name, host_node_name, process_num, program_name, here (the directory of
the supervisord config file), and all supervisord's environment variables
prefixed with ENV_."
Introduced: 3.0
Related:
There are open pull requests to enable expansion in additional section values:
https://github.com/Supervisor/supervisor/issues?labels=expansions&page=1&state=open
A search of Google (or SO) returns no examples of attempts to use %(ENV_VAR)s
expansion in the command section value:
https://www.google.com/search?q=supervisord+environment+expansion+in+command
I agree supervisor is not clear about this (to me at least).
I've found that the easiest solution is to execute the command through /bin/bash -c.
In your case it would be:
command=/bin/bash -c "/path/to/myprog/myservice /data/myprog/${SITE}/${SITE}.db ..."
What do you think?
I've found inspiration here: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
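For reference, a sketch of that workaround dropped into the [program:x] section from the question (untested; it assumes the SITE value set via environment= in [supervisord] is inherited by the child process, so bash can expand ${SITE} at runtime):
; let bash expand ${SITE} instead of relying on %(ENV_SITE)s config expansion
[program:service_name]
command=/bin/bash -c "/path/to/myprog/myservice /data/myprog/${SITE}/${SITE}.db %(program_name)s_%(process_num)03d"
process_name=%(program_name)s_%(process_num)03d
numprocs=5
numprocs_start=1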
You are doing it right; however, the ENV defined in your supervisord section doesn't get made available to the processes for whatever reason during configuration loading. If you start supervisord like this:
SITE=mysite supervisord
It will run correctly and expand that variable. I don't know why supervisord has issues adding to the environment and making it available to the subprocesses' config expansion. I think the environment variable is available inside the subprocess, but not when expanding variables in the subprocess config declaration.