CoffeeScript build setup that supports unit testing?

I want to use CoffeeScript for building what will essentially be a JavaScript library.
I would just like to be able to:
- define some classes, with inheritance
- keep my code in several files
- write some unit tests (QUnit or whatever works, preferably writing the tests in CoffeeScript)
- (ideally) have the project watched and built automatically while I work
This seems reasonable, no? My plan is just having the unit tests run against the compiled JavaScript, in a browser, although if I can run them straight in node.js that's even better.
Currently I'm trying to do this with CoffeeToaster and QUnit, using two different CoffeeToaster configurations, one with tests and one without. It is working, but perhaps somebody has a better suggestion? Should I ditch CoffeeToaster and do it with Cake? Or get another unit testing framework? Can anybody point me to a tutorial for this? I'm making a client-side JS lib, so I don't want to involve Rails etc.

I'm currently using:
- Mocha as the test runner and should.js for assertions;
- Mockery to intercept certain require calls for isolated testing with mocks/stubs of required libraries;
- JSCoverage for instrumenting the code for code coverage reports.
My code lives in src/ and I write my tests in CoffeeScript. I use make to build and test the code.
- make build compiles the CoffeeScript in src/ to JavaScript in lib/.
- make test builds the code and then runs the tests in test/.
- make monitor watches the tests and runs them as soon as they change. Unfortunately it doesn't recompile the code; I use a Vim keybinding to call make, which also triggers Mocha to re-run the tests. (Edit: if this bothers you, you could run coffee --watch -o lib/ -c src/.)
- make coverage generates a code coverage report and puts it in lib-cov/report.html.
My Makefile looks somewhat like this:
COFFEE = ./node_modules/.bin/coffee --compile
MOCHA = NODE_ENV=test ./node_modules/.bin/mocha
MOCHA_OPTS = \
	--compilers coffee:coffee-script \
	--require should \
	--colors
REPORTER = spec

build:
	$(COFFEE) --output lib/ src/

test: build
	$(MOCHA) --reporter $(REPORTER) $(MOCHA_OPTS)

monitor:
	$(MOCHA) --reporter min $(MOCHA_OPTS) \
		--watch --growl

coverage: instrument
	MYLIB_COV=1 $(MOCHA) $(MOCHA_OPTS) \
		--reporter html-cov > lib-cov/report.html

instrument: build
	rm -rf ./lib-cov
	jscoverage ./lib ./lib-cov

.PHONY: build test monitor coverage instrument
You could probably use the above with very little modification.
To generate the coverage report with make coverage, the tests must be run against the instrumented code in lib-cov/ instead of the code in lib/. To make this possible, three things are needed:
1. The Makefile should set an environment variable, like MYLIB_COV (change the name as you like).
2. Your index.js should look at this environment variable and require either lib/ or lib-cov/ accordingly:
// index.js
module.exports = process.env.MYLIB_COV
  ? require('./lib-cov/mylib')
  : require('./lib/mylib');
If you need exports from multiple source files, you can combine them here. If you have something other than index.js as 'main' in your package.json, don't forget to change it.
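For instance, a minimal sketch of such a combined index.js (user and group are hypothetical module names):
// index.js -- combining exports from several compiled files
var base = process.env.MYLIB_COV ? './lib-cov/' : './lib/';
module.exports = {
  User: require(base + 'user').User,
  Group: require(base + 'group').Group
};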
3. Your tests should require '../':
# test/test.user.coffee
describe 'User', ->
  User = {}

  before ->
    {User} = require '../'

  describe '#equals()', ->
    describe 'when users have the same username and host', ->
      it 'should return true', ->
        user1 = new User 'user', 'some.host.foo'
        user2 = new User 'user', 'some.host.foo'
        user1.equals(user2).should.be.true

  # etc.
I'll leave it as an exercise to the reader to find out whether they need Mockery and how to use it if they do. I will point out, though, that the require call in the test snippet above is done inside before for a reason.
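For those who do need it, here is a minimal sketch of the pattern, using Mockery's documented enable/registerMock API (the mocked module name is made up):
# test/test.user.coffee (variant with Mockery)
mockery = require 'mockery'

describe 'User', ->
  User = {}

  before ->
    # Mocks must be registered before the module under test is required;
    # this is exactly why the require lives inside before.
    mockery.enable useCleanCache: yes, warnOnUnregistered: no
    mockery.registerMock 'some-dependency', fetch: -> 'stubbed'
    {User} = require '../'

  after ->
    mockery.deregisterAll()
    mockery.disable()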
Happy coding!

Related

How to debug unit tests while developing a package in Julia

Say I develop a package with a limited set of dependencies (for example, LinearAlgebra).
For the unit-testing part, I might need additional dependencies (for instance, CSV to load a file). I can configure that in the Project.toml, all good.
Now, from there and in VS Code, how can I debug the unit tests? I tried running "runtests.jl" in the debugger; however, it unsurprisingly complains that the CSV package is unavailable.
I could add the CSV package (as a temporary solution), but I would prefer that the debugger run with the unit-testing configuration; how can I achieve that?
As requested, here is how it can be reproduced (it is not quite minimal; I deliberately used a commonly used package, as that gives confidence that the package itself is not the problem). We will use DataFrames and try to execute the debugger for its unit tests.
1. Make a local version of DataFrames for the purpose of developing a feature in it: I execute dev DataFrames in a new REPL.
2. Select the correct environment (in .julia/dev/DataFrames) through the VS Code user interface.
3. Execute the "proper" unit testing by running test DataFrames at the pkg prompt. Everything should go smoothly.
4. Try to execute the tests directly (open runtests.jl and use the "Run" button in VS Code). I see some errors of the type:
LoadError: ArgumentError: Package CategoricalArrays not found in current path:
- Run `import Pkg; Pkg.add("CategoricalArrays")` to install the CategoricalArrays package.
which is consistent with CategoricalArrays being present in the [extras] section of the Project.toml but not present in the [deps].
5. Finally, instead of the "Run" command, execute "Run and Debug". I encounter similar errors; here is the first one:
Test Summary: | Pass Total
merge | 19 19
PASSED: index.jl
FAILED: dataframe.jl
LoadError: ArgumentError: Package DataStructures not found in current path:
- Run `import Pkg; Pkg.add("DataStructures")` to install the DataStructures package.
So I can't debug the code after the part requiring the extras packages.
After all that, I undo the dev by running free DataFrames at the pkg prompt.
I see the same behavior in my package.
I'm not certain I understand your question, but I think you might be looking for the TestEnv package. It allows you to activate a temporary environment containing the [extras] dependencies. The Discourse announcement contains a good description of the use cases.
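A minimal sketch of a debug session with it, assuming TestEnv is installed in your default environment:
# in the REPL, with your package's environment active
using TestEnv
TestEnv.activate()            # temporary env including the [extras] test deps
include("test/runtests.jl")   # or set breakpoints and debug from here in VS Code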
Your runtests.jl file should contain all the imports necessary to run the tests.
Hence you are expected to have in your runtests.jl file lines such as:
using YourPackageName
using CSV
# the lines with tests now go here.
This is a standard in Julia package layout. For an example, have a look at any mature Julia package, such as DataFrames.jl (https://github.com/JuliaData/DataFrames.jl/blob/main/test/runtests.jl).

my coffeescript file compiles but mocha gives an error

I have a project that uses "coffee-script": "^1.7.1" in its package.json.
The code has this line in it:
[{id: id, name: name}, ...] = result.rows
This compiles fine with CoffeeScript version 1.7.1.
The problem is that when I try to use mocha for unit tests, it gives me an error on this line:
Parse error on line xyz: Unexpected '...'
Apparently mocha uses an older CoffeeScript. Is there a way to make this work without patching mocha's source?
EDIT:
my Gruntfile.coffee:
'use strict'
module.exports = ->
  @initConfig
    cafemocha:
      src: ['test/*.coffee']
      options:
        reporter: 'spec'
        ui: 'bdd'
    coffee:
      compile:
        files:
          'lib/mylib.js': ['src/*.coffee']

  @loadNpmTasks 'grunt-cafe-mocha'
  @loadNpmTasks 'grunt-contrib-coffee'
  @registerTask 'default', ['coffee', 'cafemocha']
I added mocha.opts to the test directory:
--require coffee-script/register
--compilers coffee:coffee-script/register
--reporter spec
--ui bdd
but still, when I run grunt, it gives me the same error. I am new to this environment and find it quite complicated; please help.
Starting from version 1.7.x, the CoffeeScript compiler must be explicitly registered (see the change log for version 1.7.0).
So the problem is that the CoffeeScript compiler is not registered when you're running your mocha tests, and node.js treats all your .coffee files as .js files.
The best possible solution is to specify --compilers option for your mocha tests:
--compilers coffee:coffee-script/register
If you don't want to include it to every mocha call, you could set it up using mocha.opts file.
Here are some useful links:
- the issue about it on GitHub
- the reference in the mocha docs
- the reasoning behind this breaking change in the CoffeeScript engine
Update
Looks like your issue is much deeper than I thought.
First, grunt-cafe-mocha doesn't respect mocha.opts, because it runs tests by requiring mocha as a dependency instead of calling the mocha test runner.
So, it would've been enough to add require('coffee-script/register') to the top of your gruntfile, if not for this old grunt issue.
In short, grunt uses coffee-script 1.3.x, forcing all its tasks to use the same version of coffee. I had the same problem with grunt-contrib-connect, being unable to use the latest coffee-script in my express app.
So, the only help I can offer you is a small grunt task I wrote to solve similar problem in one of my projects. It runs mocha in a separate child process, thus completely isolating it from grunt.
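The gist of it is something like the following sketch (not the exact published task; the mocha flags are the ones from your mocha.opts):
# In the Gruntfile: spawn mocha in its own process so it picks up
# the project's coffee-script instead of grunt's bundled one.
module.exports = (grunt) ->
  grunt.registerTask 'mocha', 'Run mocha in a child process', ->
    done = @async()
    grunt.util.spawn
      cmd: './node_modules/.bin/mocha'
      args: ['--compilers', 'coffee:coffee-script/register', '--reporter', 'spec']
      opts: stdio: 'inherit'
    , (err) -> done not err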
N.B. I had a thought about releasing this task to npm, but considered it too minor.

Test coverage with Karma, browserify and Coffeescript

I'm having trouble adding test code coverage. I'm using Karma, and the files added to Karma are already bundled with browserify, so karma.conf.coffee looks like this:
files: [
  { pattern: 'bin/public/client/app.js', served: yes, included: yes }
  { pattern: 'src/lib/vendor/angular-mocks/angular-mocks.js', served: yes, included: yes }
  { pattern: 'bin/tests.js', served: yes, included: no }
]
And that works for running the tests, but not for coverage.
I'm using the karma-coverage npm package, with this:
preprocessors: 'bin/public/client/app.js': ['coverage']
reporters: ['progress', 'coverage']
That actually does create coverage stat files, but they are completely wrong, because they redden (mark as uncovered) the parts that browserify brought in from node_modules (I don't have tests to cover those).
Ideally I'd gather the source maps that browserify generates and run coverage against those, but browserify embeds the source maps in the .js files. Using karma-sourcemap-loader lets me see the original CoffeeScript files of the tests when debugging (for some reason it works only in Chrome Canary; nevertheless, it works).
I tried preprocessors: 'src/client/**/*.coffee': ['coverage'], but that yields no stats at all, saying 'No data to display'.
Do you have any ideas?
upd:
I've figured it out: running the browserify-istanbul transform right after coffeeify gave me a nice coverage diagram.
Now I need to somehow remove app.js from it, because it really doesn't matter and only confuses things.
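For reference, the bundling step with that transform order looks roughly like this (a sketch of the browserify API; paths are illustrative):
# build-tests.coffee
browserify = require 'browserify'
fs = require 'fs'

b = browserify extensions: ['.coffee'], debug: yes
b.transform 'coffeeify'             # compile .coffee to JS first
b.transform 'browserify-istanbul'   # then instrument the compiled JS
b.add './src/client/tests.coffee'
b.bundle().pipe fs.createWriteStream 'bin/tests.js'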
upd:
Oh, instead of the JavaScript I have to supply the coffee files:
preprocessors: {
  'bin/tests.js': ['sourcemap']
  'src/client/**/*.coffee': ['coverage']
}
Seems I answered my own question. Also, it seems there's a bug in the current version of karma-coverage: it throws an error when coverageReporter.type is html (which is the default). I'm glad I figured it out; it's always nice to see how much code is covered by tests.

how to pass compiler options to mocha

I run a mocha command to run my tests
$ ./node_modules/.bin/mocha --compilers coffee:coffee-script -R spec
I wish to pass additional options to the coffee-script compiler (--bare to avoid the outer closure that is introduced when compiling .coffee to .js). Is there a way to do this? I tried
$ ./node_modules/.bin/mocha --compilers coffee:coffee-script --bare -R spec
but that doesn't look right. It also failed saying that --bare is not a valid option for mocha.
error: unknown option `--bare'
The --compilers option doesn't support this, but you can write a script which activates the compiler with your options, then use mocha's --require option to load your registration script. For example, create a file at the root of the project called babelhook.js:
// This file is required in mocha.opts
// The only purpose of this file is to ensure
// the babel transpiler is activated prior to any
// test code, and using the same babel options
require("babel-register")({
experimental: true
});
Then add this to mocha.opts:
--require babelhook
And that's it. Mocha will require babelhook.js before any tests.
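For the CoffeeScript case the original question asks about, the same trick might look like this (a sketch; coffeehook.js is a made-up name, and compile's bare option is from the documented coffee-script API):
// coffeehook.js -- register a .coffee loader that compiles with bare: true
var coffee = require('coffee-script');
var fs = require('fs');

require.extensions['.coffee'] = function(module, filename) {
  var source = fs.readFileSync(filename, 'utf8');
  var js = coffee.compile(source, { bare: true, filename: filename });
  return module._compile(js, filename);
};
Your mocha.opts would then contain --require coffeehook instead of the --compilers line.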
Simply add a .babelrc file to your root.
Then any instances of babel (build, runtime, testing, etc.) will reference that.
https://babeljs.io/docs/usage/babelrc/
You can even add specific config options per-environment.
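A minimal sketch of such a .babelrc, matching the stage option used in the answer above (babel 5 era syntax):
{
  "stage": 1
}
The per-environment options mentioned above go under an "env" key, selected by BABEL_ENV (falling back to NODE_ENV), as described on the linked page.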
In case anyone stumbles upon this: the 'experimental' option in babel has been deprecated. Your babelhook.js should now read:
// This file is required in mocha.opts
// The only purpose of this file is to ensure
// the babel transpiler is activated prior to any
// test code, and using the same babel options
require("babel/register")({
stage: 1
});

How do I configure tox so it will run pytest coverage on a single environment instead of all?

I have a complex tox.ini configuration with multiple environments for different versions of Python.
I would like to know how to tell tox to run coverage only on the default Python interpreter.
One of the problems is that the default Python environment can differ from one platform to another.
I have a wrapper script which calls tox -e py25,py26,docs, where the -e arguments are the detected versions of Python.
[tox]
...
[testenv:docs]
...
[testenv]
commands=py.test --cov-report xml --cov scripts
...
[testenv:py26]
...
[testenv:py25]
...
Desired behaviour: run pytest with coverage for a single environment (this is supposed to run integrated with Jenkins).
I think you could add a [testenv:py] environment, which uses the Python interpreter that tox itself was invoked with. If you define the coverage run there, you should get what you want.
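A sketch of what that might look like, reusing the commands from your config (the envlist is illustrative):
[tox]
envlist = py, py25, py26, docs

[testenv]
commands=py.test scripts

[testenv:py]
commands=py.test --cov-report xml --cov scripts
Your wrapper script would then call tox -e py,py25,py26,docs, so coverage is collected exactly once, by whichever interpreter tox itself runs under.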