Pytest pluggy._manager.PluginValidationError due to the pytest-json-report plugin

I use pytest together with the pytest-json-report plugin and have the pytest_json_modifyreport hook in my conftest.py file.
When I run pytest --json-report, everything is fine. But when I run plain pytest, it fails with: pluggy._manager.PluginValidationError: unknown hook 'pytest_json_modifyreport' in plugin.
Is it possible to get rid of that error without commenting out the hook?

According to the official documentation:
A note on hooks
If you're using a pytest_json_* hook although the plugin is not installed or
not active (not using --json-report), pytest doesn't recognize it and may fail with an internal error like this:
INTERNALERROR> pluggy.manager.PluginValidationError: unknown hook 'pytest_json_runtest_metadata' in plugin <module 'conftest' from 'conftest.py'>
You can avoid this by declaring the hook implementation optional:
import pytest
@pytest.hookimpl(optionalhook=True)
def pytest_json_runtest_metadata(item, call):
    ...
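
Applied to the hook from the question, the conftest.py would look something like this (the json_report parameter is the one the pytest-json-report docs pass to this hook):

import pytest

@pytest.hookimpl(optionalhook=True)
def pytest_json_modifyreport(json_report):
    # Runs only when the plugin is active (pytest --json-report);
    # with optionalhook=True, plain `pytest` no longer raises
    # PluginValidationError for the unknown hook.
    ...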


pytest.ini doesn't take effect when calling pytest vs. pytest <test_name>

I am working on creating some testing infrastructure and struggling with handling all the dependencies correctly.
The directory structure I have looks like:
conftest.py
kernels/
|-kernel_1/
|---<kernel_src1>
|---__init__.py
|---options.json
|---test/
|-----test_func1.py
|-kernel_2/
|---<kernel_src2>
|---__init__.py
|---pytest.ini
|---options.json
|---scripts/
|-----__init__.py
|-----some_module.py
|---test/
|-----test_func2.py
When I call pytest on any of these tests, the test first compiles and simulates the kernel source code (C++) and compares the output against a golden reference generated in Python. Since all the kernels are compiled individually, I create an output directory in the kernel's directory (e.g. kernel_1) to store compile/simulation logs along with some generated header files.
For example, pytest kernel_1/test/test_func1.py will create a directory in kernel_1/build_test_func1/<compile/sim logs>.
I use a conftest.py that updates the cwd to the test case directory, based on the accepted answer here:
Change pytest working directory to test case directory
I also added a pytest.ini to put kernel_2 on the pythonpath when running test_func2, so modules in the scripts folder can be found:
[pytest]
pythonpath=.
Tests run correctly when invoked in any of the following ways:
cd kernel_2/; pytest
cd kernel_2/test; pytest
cd kernel_2; pytest test/test_func2.py
cd kernel_2/test; pytest test_func2.py
The test also runs correctly when calling it like this: pytest kernel_2/test/test_func2.py
But I start seeing an ImportError when calling pytest from the top level without specifying the test:
pytest
ImportError while importing test module '<FULL_PATH>/kernel_2/test/test_func2.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
<FULL_PATH>miniconda3/envs/pytest/lib/python3.7/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
kernel_2/test/test_func2.py:8: in <module>
from scripts.some_module import some_func
E ModuleNotFoundError: No module named 'scripts'
It looks like the pytest.ini inside a specific kernel doesn't take effect when pytest is called from the top level, but I haven't been able to find a way to fix this. Any comments or suggestions are appreciated!
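
A possible workaround (my own sketch, not from the original thread): pytest resolves a single rootdir and applies only that one ini file, so kernel_2/pytest.ini is ignored when pytest is invoked from the top level. A top-level conftest.py that prepends each kernel directory to sys.path would sidestep that; the glob pattern below is illustrative and assumes the layout shown above:

# conftest.py at the top level (hypothetical workaround)
import sys
from pathlib import Path

ROOT = Path(__file__).parent
for kernel_dir in (ROOT / "kernels").glob("kernel_*"):
    # Makes e.g. kernel_2/scripts importable as `scripts` from anywhere.
    sys.path.insert(0, str(kernel_dir))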

How to debug unit tests while developing a package in Julia

Say I develop a package with a limited set of dependencies (for example, LinearAlgebra).
For the unit testing part, I might need additional dependencies (for instance, CSV to load a file). I can configure that in the Project.toml; all good.
Now from there and in VS Code, how can I debug the Unit tests? I tried running the "runtests.jl" in the debugger; however, it unsurprisingly complains that the CSV package is unavailable.
I could add the CSV package (as a temporary solution), but I would prefer that the debugger run with the configuration for the unit testing; how can I achieve that?
As requested, here is how it can be reproduced (it is not quite minimal; I used a widely used package instead, as that gives confidence the package itself is not the problem). We will use DataFrames and try to run the debugger on its unit tests.
Make a local version of DataFrames for the purpose of developing a feature in it: I execute dev DataFrames in a new REPL.
Select the correct environment (in .julia/dev/DataFrames) through the VS Code user interface.
Execute the "proper" unit testing by executing test DataFrames at the pkg prompt. Everything should go smoothly.
Try to execute the tests directly (open runtests.jl and use the "Run" button in VS Code). I see some errors of the type:
LoadError: ArgumentError: Package CategoricalArrays not found in current path:
- Run `import Pkg; Pkg.add("CategoricalArrays")` to install the CategoricalArrays package.
which is consistent with CategoricalArrays being present in the [extras] section of the Project.toml but not present in the [deps].
Finally, instead of the "Run" command, execute "Run and Debug". I encounter similar errors; here is the first one:
Test Summary: | Pass Total
merge | 19 19
PASSED: index.jl
FAILED: dataframe.jl
LoadError: ArgumentError: Package DataStructures not found in current path:
- Run `import Pkg; Pkg.add("DataStructures")` to install the DataStructures package.
So I can't debug the code after the part requiring the extras packages.
After all that, I remove this local version with the command free DataFrames at the pkg prompt.
I see the same behavior in my own package.
I'm not certain I understand your question, but I think you might be looking for the TestEnv package. It allows you to activate a temporary environment containing the [extras] dependencies. The discourse announcement contains a good description of the use cases.
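For instance (a minimal sketch, assuming TestEnv is installed in your default environment and the package's own environment is active):

using TestEnv
TestEnv.activate()           # temporary env that includes the [extras] deps
include("test/runtests.jl")  # CSV etc. now resolve, also under the debugger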
Your runtests.jl file should contain all the necessary imports to run the tests.
Hence you are expected to have in your runtests.jl file lines such as:
using YourPackageName
using CSV
# the lines with tests now go here.
This is the standard Julia package layout. For an example, have a look at any mature Julia package, such as DataFrames.jl (https://github.com/JuliaData/DataFrames.jl/blob/main/test/runtests.jl).
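Fleshed out slightly (the package name is a placeholder), a minimal runtests.jl has this shape:

using Test
using YourPackageName
using CSV   # test-only dependency declared under [extras]/[targets]

@testset "YourPackageName" begin
    @test true  # real tests go here
end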

How to debug unit tests in VS Code using pytest

I am trying to use VS Code to debug some unit tests using pytest. One special thing is that I am importing the module under test from another folder:
src/
    func.py
test/
    test_func.py
So inside test_func.py, I import the function I need to test:
import func

def test_func():
    ...
But when I try to debug it using VS Code, it tells me "module func is not found!"
Should I do some extra configuration to make it work?
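
One common fix (an assumption on my part, not from the original post) is to put src on the module search path before tests are collected, e.g. with a conftest.py at the project root:

# conftest.py at the project root (hypothetical fix)
import os
import sys

# Let `import func` resolve by adding src/ to the module search path.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "src"))

Alternatively, a .env file at the workspace root containing PYTHONPATH=src should have a similar effect, since the VS Code Python extension reads it when running and debugging tests.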

How to watch mocha-webpack test results in the mocha-test-explorer VS Code extension?

I'm working on a freshly created Vue CLI project with TypeScript and unit testing.
I know that Vue CLI uses mocha-webpack to run the tests, and when I type mocha-webpack in the terminal, the tests run OK.
But I would like to use mocha-test-explorer, a VS Code extension for Mocha testing, for a better programming experience.
I installed the extension and configured it like this:
"mochaExplorer.files": "tests/**/*.ts",
"mochaExplorer.require": "ts-node/register"
But I'm still not able to see the tests. The test explorer shows the following output:
(function (exports, require, module, __filename, __dirname) { import { expect } from 'chai'
^^^^^^
SyntaxError: Unexpected token import
I know that is because mocha is not loading files through webpack.
What should I do to see mocha-webpack tests in the mocha-test-explorer extension (or any other GUI extension) for VS Code?
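
One thing to try (an assumption on my part, not a confirmed fix): the "Unexpected token import" suggests ts-node is emitting ES modules rather than CommonJS, so forcing CommonJS output through ts-node's TS_NODE_COMPILER_OPTIONS environment variable, which the extension can pass via its mochaExplorer.env setting, may at least get the TypeScript tests loading:

"mochaExplorer.files": "tests/**/*.ts",
"mochaExplorer.require": "ts-node/register",
"mochaExplorer.env": {
    "TS_NODE_COMPILER_OPTIONS": "{\"module\": \"commonjs\"}"
}

Note that this bypasses webpack entirely, so tests relying on webpack-only features (aliases, asset imports) would still fail and need separate handling.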

My CoffeeScript file compiles but mocha gives an error

I have a project that uses "coffee-script": "^1.7.1" in its package.json.
The code has this line in it:
[{id: id, name: name}, ...] = result.rows
This compiles fine using CoffeeScript version 1.7.1.
The problem is that I am trying to use mocha for unit tests and it gives me an error on this line:
Parse error on line xyz: Unexpected '...'
Apparently mocha uses an older CoffeeScript. Is there a way to make it work without modifying mocha's source?
EDIT:
my Gruntfile.coffee:
'use strict'

module.exports = ->
  @initConfig
    cafemocha:
      src: ['test/*.coffee']
      options:
        reporter: 'spec'
        ui: 'bdd'
    coffee:
      compile:
        files:
          'lib/mylib.js': ['src/*.coffee']

  @loadNpmTasks 'grunt-cafe-mocha'
  @loadNpmTasks 'grunt-contrib-coffee'
  @registerTask 'default', ['coffee', 'cafemocha']
I added mocha.opts to the test directory:
--require coffee-script/register
--compilers coffee:coffee-script/register
--reporter spec
--ui bdd
but still, when I run grunt, it gives me the same error. I am new to this environment and find it quite complicated; please help.
Starting from version 1.7.x, the CoffeeScript compiler must be explicitly registered (see the change log for version 1.7.0).
So the problem is that the CoffeeScript compiler is not registered when you're running your mocha tests, and Node.js treats all your .coffee files as .js files.
The best solution is to specify the --compilers option for your mocha tests:
--compilers coffee:coffee-script/register
If you don't want to include it in every mocha call, you can set it up in a mocha.opts file.
Here are some useful links:
issue about it on github
reference in mocha docs
the reason behind this breaking change in CoffeeScript engine
Update
Looks like your issue is much deeper than I thought.
First, grunt-cafe-mocha doesn't respect mocha.opts, because it runs tests by requiring mocha as a dependency instead of calling the mocha test runner.
So it would have been enough to add require('coffee-script/register') to the top of your Gruntfile, if not for this old grunt issue.
In short, grunt uses coffee-script 1.3.x, forcing all its tasks to use the same version of CoffeeScript. I had the same problem with grunt-contrib-connect, being unable to use the latest coffee-script in my Express app.
So the only help I can offer is a small grunt task I wrote to solve a similar problem in one of my projects. It runs mocha in a separate child process, completely isolating it from grunt; see the sketch below.
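A sketch of that approach (illustrative, not the author's published code), in the same Gruntfile style as above:

spawn = require('child_process').spawn

module.exports = ->
  # Spawn the locally installed mocha in a child process, so grunt's own
  # coffee-script 1.3.x never touches the test files.
  @registerTask 'mocha-child', ->
    done = @async()
    child = spawn './node_modules/.bin/mocha',
      ['--compilers', 'coffee:coffee-script/register', '--reporter', 'spec'],
      stdio: 'inherit'
    child.on 'close', (code) -> done(code is 0)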
N.B. I had a thought about releasing this task to npm, but considered it too minor.