SimpleCov coverage report generated using RubyMine has only one tab

MacOS Monterey 12.6,
RubyMine 2022.2.3,
simplecov (0.12.0),
test framework: rspec
SimpleCov configuration used:
# spec_helper.rb
require 'simplecov'
SimpleCov.start 'rails'
Question:
Why does generating the report from RubyMine put everything in one tab? Is there any RubyMine-specific configuration needed? I didn't find anything in the documentation.
This is OK
When running tests from the command line:
$ rspec spec
$ open coverage/index.html
Then the coverage report contains all of these tabs:
All files, Controllers, Models, Mailers, Helpers, Jobs, Libraries,
Ungrouped
This is not OK
When running tests from the RubyMine editor and then using the "Generate coverage report" button (which saves to the "coverage" folder)
Then the coverage report contains only one tab:
All files
In both cases the coverage percentage is correct.

Related

Unable to run/debug robot tests in vscode - robocorp extensions installed

I have installed Robocorp Code as well as Robot Framework Language Server and have configured them. However, I am still getting errors when trying to run the tests via the code lens options.
Repo - a web API repo with a specific folder containing all the tests; let's call it regression.
RF - 4.1.3
Python - 3.8
This is what happens when I click Run on the code lens for any of the tests:
`PS C:\git\xxxx\regression> C:; cd 'C:\git\xxxx\regression'; &
'C:\Users\xxxx\AppData\Local\Temp\rf-ls-run\run_env_00_smh5defr.bat'
'-u'
'c:\Users\xxxx.vscode\extensions\robocorp.robotframework-lsp-0.47.2\src\robotframework_debug_adapter\run_robot__main__.py'
'--port' '54331' '--no-debug' '--argumentfile'
'C:\git\xxxx\regression\args-local.txt' '--pythonpath'
'c:\git\xxxx\regression\common\lib' '--variable'
'EXECDIR:C:/git/xxxx/regression'
'--prerunmodifier=robotframework_debug_adapter.prerun_modifiers.FilteringTestsSuiteVisitor'
'c:\git\xxxx\regression\api\api_Test.robot'
[ ERROR ] Parsing '--pythonpath' failed: File or directory to execute does not exist.
However, the test starts if I remove the argumentfile parameter, but it, of course, fails because it's missing arguments from the file.
Do note that the folder specified in pythonpath exists and has some Python libraries needed for the tests.

On Visual Studio Code, how do I specify my pytest.ini file for test discovery

I use pytest for testing. My test files reside in a subdirectory tests and they are named Foo.py, Bar.py instead of test_Foo.py, TestFoo.py, etc. So, to make sure pytest finds them, I have a pytest.ini file in the root dir of the project with the following contents:
[pytest]
python_files=tests/*py
How do I specify the path to the pytest.ini file in Visual Studio Code so that the vscode-python plugin can correctly/successfully discover my test files? No matter what I try, I get "Test discovery failed", with no reason given.
To set up VS Code to use a specific pytest.ini file, you need to do the following:
Open a directory in VS Code (ctrl+k > ctrl+o)
Select a Python interpreter (ctrl+shift+p > Python: Select Interpreter > Python interpreter)
Configure the testing framework you want to use, in this case PyTest (ctrl+shift+p > Python: Configure Tests > Pytest > {pytest rootdir})
Open the settings.json file generated inside the .vscode/ directory that was created in your working directory (the one you chose in step 1)
Add the following setting to the file (it may already exist if you specified a rootdir when configuring pytest):
"python.testing.pytestArgs": [
"-c",
"/path/to/your/pytest.ini"
],
That's it! VS Code should be using the pytest.ini file you specify in the last argument. You can specify any CLI options you want there.
Source
Note that pytest still requires test function names to start with test; by default it only collects files named test_*.py or *_test.py. The python_files line in the ini file above overrides that default so that every .py file under tests/ is treated as a test module.
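For example, under the python_files pattern above, a file that does not follow the default naming is still collected, as long as its test functions start with test. A minimal sketch (the file name Foo.py comes from the question; the function itself is just a hypothetical illustration):
# tests/Foo.py -- collected because pytest.ini sets python_files = tests/*py
def test_addition():
    # a plain assert is enough; pytest rewrites it into a readable failure message
    assert 1 + 1 == 2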

py.test gives Coverage.py warning: Module sample.py was never imported

I ran sample code from this thread:
How to properly use coverage.py in Python?
However, when I executed the command py.test test.py --cov=sample.py, it gave me a warning and, therefore, no report was created.
platform linux2 -- Python 2.7.12, pytest-3.2.3, py-1.4.34, pluggy-0.4.0
rootdir: /media/sf_Virtual_Drive/ASU/CSE565_testand
validation/Assignments/temp, inifile:
plugins: cov-2.5.1
collected 3 items
test.py ...Coverage.py warning: Module sample.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
Does anyone have an idea why coverage.py does not work here? Note that if I run coverage run -m py.test test.py separately, it does not show any warning.
Short answer: you need to run with the module name, not the file name: pytest --cov sample test.py
Long answer:
One comment in the answer you linked (How to properly use coverage.py in Python?) explains that this doesn't seem to work if the file you are trying to get the coverage of is a module imported by the test. I was able to reproduce that:
./sample.py
def add(*args):
    return sum(args)
./test.py
from sample import add

def test_add():
    assert add(1, 2) == 3
And I get the same error:
$ pytest --cov sample.py test.py
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
rootdir: /path/to/directory, inifile:
plugins: cov-2.6.1
collected 1 item
test.py . [100%]Coverage.py warning: Module sample.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
/path/to/directory/.venv/lib/python3.7/site-packages/pytest_cov/plugin.py:229: PytestWarning: Failed to generate report: No data to report.
self.cov_controller.finish()
WARNING: Failed to generate report: No data to report.
However, when using the module name instead:
pytest --cov sample test.py
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
rootdir: /path/to/directory, inifile:
plugins: cov-2.6.1
collected 1 item
test.py . [100%]
---------- coverage: platform darwin, python 3.7.2-final-0 -----------
Name        Stmts   Miss  Cover
-------------------------------
sample.py       2      0   100%
The pytest-cov documentation seems to indicate you can use a path, but it might not be working in all cases...
tl;dr
Use coverage to generate the statistics file .coverage and then create a report that scopes to your specific file only.
coverage run -m pytest .\test\test_named_prng.py
coverage html --include=named_prng.py
Situation
Let's suppose you have some python files in your package, and you also have test cases within a single test file (test/test_named_prng.py). You want to measure the code coverage of your test file on one specific file within your package (named_prng.py).
\namedPrng
│   examples.py
│   named_prng.py
│   README.md
│   timeit_meas.py
│   __init__.py
│
└───test
        test_named_prng.py
        __init__.py
Here namedPrng/__init__.py imports examples.py and named_prng.py, while the other __init__.py (under test/) is empty.
An example with files is available on my GitHub.
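A minimal sketch of what that top-level __init__.py could look like (the real file is in the linked repository; the exact import style here is an assumption):
# namedPrng/__init__.py -- hypothetical sketch
# Importing the package executes this file, which in turn pulls in both modules.
from . import examples
from . import named_prng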
Problem
Your problem is that with pytest or with coverage you cannot scope the report to your specific file (named_prng.py), because every other file imported from your package is also included in the report.
Root cause
If you have an __init__.py in the level where the module you want to import is located, then __init__.py may import more files than necessary as the __init__.py will be executed. There are options to tell pytest and coverage to restrict which modules you want to investigate, but if they involve further modules from your package, they will be analysed too.
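A quick way to see that side effect, as a hypothetical check (assuming the layout above and that the package is importable from the current directory):
# check_init_side_effect.py -- hypothetical demo
import sys

import namedPrng.named_prng  # importing a submodule runs namedPrng/__init__.py first

# examples.py was never imported explicitly, yet it is already loaded,
# so coverage will collect data for it as well.
print("namedPrng.examples" in sys.modules)  # True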
Symptom with pytest
The --cov option of the pytest-cov package doesn't work if the (sub)module you want to create the coverage report on is imported from __init__.py.
If you run pytest (from namedPrng) with
pytest .\test\test_named_prng.py --cov --cov-report=html
you will get a report on every .py file except timeit_meas.py, because that one is never imported: not by the test, not by either __init__.py, and not by the imported named_prng.py.
If you run pytest with
pytest .\test\test_named_prng.py --cov=./ --cov-report=html
then you explicitly tell coverage (invoked through pytest) to include everything at your level, so every .py file will be included in the report.
You'd like to tell coverage to create the report only on the source code of named_prng.py, but if you specify your module to --cov with
pytest .\test\test_named_prng.py --cov=named_prng --cov-report=html
or with --cov=named_prng.py you will get a warning:
Coverage.py warning: Module named_prng.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
WARNING: Failed to generate report: No data to report.
Symptom with coverage
One can run the coverage collection and the reporting separately and hope that more detailed options can be passed to coverage.
By issuing
coverage run -m pytest .\test\test_named_prng.py
coverage html
you get the same report on the 5 .py files. If you try to tell coverage to use only named_prng.py by
coverage run --source=named_prng -m pytest .\test\test_named_prng.py
or with --source=named_prng.py, you will get a warning
Coverage.py warning: Module named_prng.py was never imported. (module-not-imported)
Coverage.py warning: No data was collected. (no-data-collected)
and no report will be created.
Solution
You need to use the --include switch of coverage, which unfortunately cannot be passed through pytest on the command line.
Use coverage CLI
You can restrict the scope of the investigation at coverage-collection time:
coverage run --include=named_prng.py -m pytest .\test\test_named_prng.py
coverage html
or at reporting time:
coverage run -m pytest .\test\test_named_prng.py
coverage html --include=named_prng.py
Use pytest + settings file
One can call pytest with a detailed configuration via a config file. In the directory where you issue pytest, set up a .coveragerc file with the content:
[run]
include = named_prng.py
Check coverage's documentation for the possible options and patterns.
This can be solved by running coverage on your test file first and then generating the report, as follows:
coverage run test.py
coverage report -m

Gallio with NCover shows 0% code coverage in Sonar UI

I am using sonar-runner to run tests and code coverage over my C# code with the help of the Gallio plugin. The tests are running fine, but I am not able to see any code coverage in the Sonar web UI.
My Sonar settings are as follows:
sonar-project.properties
mentioning only relevant bits
sonar.gallio.coverage.tool = NCover
sonar.NCover.installDirectory = C:/Program Files/NCover
sonar.donet.visualstudio.testProjectPattern = .Test
sonar.dotnet.buildConfigurations = "Release|x86"
Any idea what could be missing?
sonar.projectKey=XXX:XXX
sonar.projectVersion=trunk
sonar.projectName=XXX
sources=.
sonar.language=cs
sonar.dotnet.visualstudio.solution.file=Project.sln
sonar.dotnet.excludeGeneratedCode=false
sonar.dotnet.4.0.sdk.directory=C:/WIndows/Microsoft.NET/Framework/v4.0.30319
sonar.dotnet.version=4.0
# Gallio
sonar.gallio.mode=
sonar.gallio.coverage.tool=NCover
sonar.gallio.runner=IsolatedAppDomain
sonar.NCover.installDirectory=c:/Program Files/NCover
sonar.gallio.installDirectory=C:/Program Files/Gallio
sonar.dotnet.test.assemblies=$(SolutionDir)/../**/bin/**/*.Tests.Unit.dll
# FXCop
sonar.fxcop.mode=
#StyleCop
sonar.stylecop.mode=
#NDeps
sonar.ndeps.mode=skip
sonar-runner.properties
You said
sonar.dotnet.buildConfigurations = "Release|x86"
If that's true, your build likely isn't generating .pdb files, which are needed to figure out the mapping between the binaries and your source files.
Does it work if you try it with a Debug build?
I was seeing this same behavior with NCover in Sonar. I found that Sonar was generating invalid arguments for Gallio's NCover runner.
Try piping the output from Sonar's runner into a text file so that you can examine the arguments more easily (on the command line, you can just type sonar-runner > output.txt to do this).
You will likely see a line like this in your output:
INFO .u.c.CommandExecutor - Executing command: C:\Program Files\Gallio\bin\Gallio.Echo.exe /r:Local /report-directory:E:\Reports\.sonar /report-name-format:gallio-report /report-type:Xml E:\Projects\UnitTests\bin\Release\UnitTests.dll /runner-property:NCoverCoverageFile=E:\Reports\.sonar\coverage-report.xml /runner-property:NCoverArguments=//ias MyFirstAssembly;MySecondtAssembly;MyThirdAssembly
If you attempt to execute this manually via Gallio on the command line, you will get an error:
Cannot find file 'MyFirstAssembly;MySecondtAssembly;MyThirdAssembly'
If you edit this list manually down to a single entry such as MyFirstAssembly*, everything will work as expected.
This seems to indicate that Sonar is generating invalid command line arguments for Gallio. As much as I love NCover, the easiest solution was to use OpenCover instead.

CoffeeScript build setup that supports unit testing?

I want to use CoffeeScript for building what will essentially be a JavaScript library.
I would just like to be able to
define some classes, with inheritance
keep my code in several files
write some unit tests (QUnit or whatever works, preferably writing tests in CoffeeScript)
(ideally) have the project watched and built automatically while I work
This seems reasonable, no? My plan is just having the unit tests run against the compiled JavaScript, in a browser, although if I can run them straight in node.js that's even better.
Currently I'm trying to do this with CoffeeToaster and QUnit, using two different CoffeeToaster configurations, one with tests and one without. It is working, but perhaps somebody has a better suggestion? Should I ditch CoffeeToaster and do it with Cake? Or get another unit testing framework? Can anybody point me to a tutorial for this? I'm making a clientside JS lib, so I don't want to involve Rails etc.
I'm currently using:
Mocha as the test runner and should.js for assertions;
Mockery to intercept certain require calls for isolated testing with mocks/stubs of required libraries;
JSCoverage for instrumenting the code for code coverage reports.
My code lives in src/ and I write my tests in CoffeeScript. I use make to build and test the code.
make build compiles the CoffeeScript in src/ to JavaScript in lib/.
make test builds the code and then runs the tests in test/.
make monitor watches and runs the tests as soon as they change. Unfortunately it doesn't recompile the code. I use a Vim keybinding to call make, which also triggers Mocha to re-run the tests.
Edit: If this bothers you, you could run coffee --watch -o lib/ -c src/.
make coverage generates a code coverage report and puts it in lib-cov/report.html.
My Makefile looks somewhat like this:
COFFEE = ./node_modules/.bin/coffee --compile
MOCHA = NODE_ENV=test ./node_modules/.bin/mocha
MOCHA_OPTS = \
	--compilers coffee:coffee-script \
	--require should \
	--colors
REPORTER = spec

build:
	@$(COFFEE) --output lib/ src/

test: build
	@$(MOCHA) --reporter $(REPORTER) $(MOCHA_OPTS)

monitor:
	@$(MOCHA) --reporter min $(MOCHA_OPTS) \
		--watch --growl

coverage: instrument
	@MYLIB_COV=1 $(MOCHA) $(MOCHA_OPTS) \
		--reporter html-cov > lib-cov/report.html

instrument: build
	@rm -rf ./lib-cov
	@jscoverage ./lib ./lib-cov

.PHONY: build test monitor coverage instrument
You could probably use the above with very little modification.
To generate the coverage report with make coverage, the tests must be run against the instrumented code in lib-cov/ instead of the code in lib/. To make this possible, three things are needed:
The Makefile should set an environment variable, like MYLIB_COV (change the name as you like).
Your index.js should look at this environment variable and require either lib/ or lib-cov/ accordingly:
// index.js
module.exports = process.env.MYLIB_COV
  ? require('./lib-cov/mylib')
  : require('./lib/mylib');
If you need exports from multiple source files, you can combine them here. If you have something other than index.js as 'main' in your package.json, don't forget to change it.
Your tests should require '../':
# test/test.user.coffee
describe 'User', ->
  User = {}

  before ->
    {User} = require '../'

  describe '#equals()', ->
    describe 'when users have the same username and host', ->
      it 'should return true', ->
        user1 = new User 'user', 'some.host.foo'
        user2 = new User 'user', 'some.host.foo'
        user1.equals(user2).should.be.true

# etc.
I'll leave it as an exercise to the reader to find out whether they need Mockery and how to use it if they do. I will point out, though, that the require call in the test snippet above is done inside before for a reason.
Happy coding!