I'm trying to run some database queries in a Mocha test but I'm running into some problems.
Here's the test (using Mongoose):
it.only "should create some objects", (done) ->
await models.MyModel1.count defer(err, oldModel1Count)
await models.MyModel2.count defer(err, oldModel2Count)
# ... do some stuff
await models.MyModel1.count defer(err, newModel1Count)
await models.MyModel2.count defer(err, newModel2Count)
assert.equal oldModel1Count + 1, newModel1Count
assert.equal oldModel2Count + 1, newModel2Count
The command for running the tests:
mocha --compilers coffee:iced-coffee-script --require iced-coffee-script --require mocha --colors --recursive test
The error happens on the first line:
ReferenceError: err is not defined
I can only assume it is compiling this code with plain CoffeeScript, so it treats defer as an ordinary function and tries to evaluate err.
Is it possible to write the Mocha tests in IcedCoffeeScript?
This works for me
mocha --require ./fix_my_iced_tests.js --compilers coffee:coffee-script
Create fix_my_iced_tests.js:
require('iced-coffee-script').register()
Create test/some_test.coffee (this makes sure the fix actually works):
assert = require 'assert'

describe 'test section', ()->
  it 'is ok', (done)->
    await setTimeout (defer next), 100
    assert.strictEqual(1, 1)
    done()
    return
  return
You should receive something like this:
test section
  √ is ok (102ms)

1 passing (109ms)
Without the fix you should receive something like this:
ReferenceError: next is not defined
--- EDITED ---
A much better option, found here:
mocha --compilers coffee:iced-coffee-script/register
Not sure if that's still relevant, but it's 2015 now, and Node.js has first-class support for Promises and Generators, which lets you write your code just as concisely and elegantly as in IcedCoffeeScript, but with far fewer wrinkles.
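For illustration, here is a rough sketch of the test from the question rewritten with generators and the co library, returning a Promise to Mocha instead of calling done. It is only a sketch: models comes from the question, createObjects() is a hypothetical stand-in for the "# ... do some stuff" part, and it assumes your Mongoose version exposes query.exec() returning a Promise and a Mocha new enough to accept a returned Promise.

// Sketch only: `models` is from the question, `createObjects()` is a hypothetical
// helper standing in for "# ... do some stuff", and Mongoose's query.exec() is
// assumed to return a Promise.
var co = require('co');
var assert = require('assert');

it('should create some objects', function () {
  // Mocha treats a returned Promise as the test result, so no done() is needed.
  return co(function* () {
    var oldModel1Count = yield models.MyModel1.count().exec();
    var oldModel2Count = yield models.MyModel2.count().exec();

    yield createObjects(); // ... do some stuff

    var newModel1Count = yield models.MyModel1.count().exec();
    var newModel2Count = yield models.MyModel2.count().exec();

    assert.equal(oldModel1Count + 1, newModel1Count);
    assert.equal(oldModel2Count + 1, newModel2Count);
  });
});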
I have my test written like this, with a tag:
main() {
  test("testing wether test works", () async {
    expect(true, true);
  }, tags: 'testme');
}
...and I'm running it like so, specifying only the tagged tests
pub run test --tags "testme"
When I run it, all the tests in my project run, not just the tagged ones. Is this the correct syntax and command to run?
Actually, it works just as shown in the question. My problem was that another error was occurring.
TLDR: How can I get better output from pytest?
I'm using Django with regular Python 3 unittest tests.
I've just switched to pytest-django for running tests.
pytest throws an error for almost all my tests (149 in total).
Pages and pages of this error:
self = <RegexURLResolver 'project.urls' (None:None) ^/>

    @property
    def reverse_dict(self):
        language_code = get_language()
        if language_code not in self._reverse_dict:
            self._populate()
>       return self._reverse_dict[language_code]
E       KeyError: 'en-us'
This wasn't the actual problem, and it led me down the wrong path.
I had a syntax error in one of my views.py files.
./manage.py test resulted in:
snip
File "/home/roland/project/views.py", line 20
code = zip(list1, list2])
SyntaxError: invalid syntax
Notice the stray ] at the end, which was the problem.
So: How can I get more useful output on problems when using pytest?
Btw: after finding this and scrolling back through the pytest output, I did find a mention of the syntax error. It was just buried in the output.
You can use the --maxfail=1 option so pytest stops immediately on the first failure.
Also, make sure your pytest.ini is set up properly so that pytest knows it should be using pytest-django:
[pytest]
DJANGO_SETTINGS_MODULE = myapp.settings
For my workflow, I usually do the following:

1. Run pytest --maxfail=1 myfile.py &> pytest-output.txt
2. tail, grep, or search the text file for errors.
3. Fix and iterate.
There are a lot of other configuration options that will help you to get more meaningful input from pytest.
I'm using Grunt and Karma (singleRun: false). My tests are written in CoffeeScript. Each time a coffee file changes I want my tests to run, but I don't know how to make both things happen.
So far I've discovered the watch task; I tried to add my coffee compilation there and hook the watcher into my test task like this:
//karma.conf.js
singleRun: true,
and in Gruntfile:
//Gruntfile.js
watch: {
  coffee: {
    files: ['test/spec/{,*/}*.coffee'],
    tasks: 'coffee'
  }
}

grunt.registerTask('test', [
  'clean:server',
  'coffee',
  'concurrent:test',
  'autoprefixer',
  'connect:test',
  'karma',
  'watch:coffee'
]);
This way the Karma watcher watches the JavaScript files, but my own coffee watcher is never triggered.
Right now I've simply removed watch:coffee from the test task and I run grunt test and grunt watch:coffee in separate terminals, which feels a bit pathetic. Is there a better way?
A much better approach is to use karma-coffee-preprocessor. It's simple to set up, and I can keep singleRun: true.
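For reference, a minimal karma.conf.js sketch of that setup, assuming karma-coffee-preprocessor is installed and the spec files live under test/spec/ as in the question:

// karma.conf.js -- minimal sketch; the paths are assumptions based on the question
module.exports = function(config) {
  config.set({
    singleRun: true,
    files: [
      'test/spec/**/*.coffee'
    ],
    preprocessors: {
      // compile every CoffeeScript file before it is served to the browser
      '**/*.coffee': ['coffee']
    }
  });
};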
I have my Karma config set up correctly: config file, running in the background, just great. As soon as I change and save a file, it reruns the tests... all 750 of the unit tests. I want to be able to run just a few. Short of manually hacking the config file or commenting out hundreds of tests across many files, is there an easy way to do it?
E.g. when running command-line server tests using, say, mocha, I just use a regexp: mocha -g 'only tests that I want'. That makes it much easier to debug and check things quickly.
So now I feel foolish. mocha supports a very narrow version of regexp matching.
This runs all tests
describe('all tests', function(){
  describe('first tests', function(){
  });
  describe('second tests', function(){
  });
});
This runs just 'first tests'
describe('all tests', function(){
  describe.only('first tests', function(){
  });
  describe('second tests', function(){
  });
});
You can also do it.only()
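For completeness, a minimal sketch of the it.only() variant:

describe('all tests', function(){
  it.only('the one spec I care about', function(){
    // only this spec runs while .only is present
  });
  it('everything else', function(){
    // skipped
  });
});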
I should have noticed that. Sigh.
Unfortunately, you can only do that at Karma startup time, not at runtime.
If you want to change it dynamically you have to put in some more effort.
Say you want to focus on a specific set/suite of tests from the beginning: on the karma-mocha plugin page there's this snippet of code that does what you want:
module.exports = function(config) {
  config.set({
    // karma configuration here
    ...

    // this is a mocha configuration object
    client: {
      // The pattern string will be passed to mocha
      args: ['--grep', '<pattern>'],
      ...
    }
  });
};
In order to make the <pattern> parametric, you have to wrap the configuration file in a configurator that listens to the CLI and customizes the Karma configuration for you.
Have a look at this SO answer to see how to set up a very simple configurator; a rough sketch of the idea follows.
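A very small configurator can be as little as reading the pattern from process.argv inside karma.conf.js. This is only a sketch: the --grep flag name is arbitrary, and it assumes Karma passes the extra flag through untouched, the same premise as the minimist-based answer just below.

// karma.conf.js -- rough sketch of a minimal "configurator":
// read an optional --grep value from the command line and forward it to mocha.
module.exports = function(config) {
  var grepIndex = process.argv.indexOf('--grep');
  var pattern = grepIndex !== -1 ? process.argv[grepIndex + 1] : null;

  config.set({
    // ...the rest of your karma configuration...
    client: {
      // only pass --grep through when a pattern was actually given
      args: pattern ? ['--grep', pattern] : []
    }
  });
};

With that in place, something like karma start karma.conf.js --grep "first tests" should run only the matching suites.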
I had the same question, and this is my workaround: a small change to karma.conf.js.
In essence, it takes an argument from the command line and modifies the pattern in files.
I use minimist to parse the argument list.
In config file:
/* Begin */
var minimist = require('minimist');
var argv = minimist(process.argv);

var testBase = "test/unit";
var testExt = ".spec.js";
var unitTestPattern = testBase + '/**/*' + testExt;

if ("test" in argv) {
  unitTestPattern = testBase + "/" + argv["test"] + testExt;
}
/* End */
module.exports = function(config){
  config.set({
    //....
    files: [
      //....
      unitTestPattern, // place here
      // 'test/unit/**/*.spec.js', // replace this
      //....
    ],
    //....
  });
};
Run in the command prompt:
karma start test/karma.conf.js --single-run --test #TEST_CASE_FILE#
A nice extension that can help here is karma-jasmine-html-reporter-livereload
https://www.npmjs.com/package/karma-jasmine-html-reporter-livereload
or karma-jasmine-html-reporter
https://www.npmjs.com/package/karma-jasmine-html-reporter
It creates a debug page from which you can run each test individually. Very useful for large projects!
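If I remember the package README correctly, wiring karma-jasmine-html-reporter in looks roughly like the sketch below; treat the 'kjhtml' reporter name as an assumption and verify it against the package page before relying on it.

// karma.conf.js -- sketch only; 'kjhtml' is the reporter name as I recall it
// from the karma-jasmine-html-reporter README, so double-check it
module.exports = function(config) {
  config.set({
    frameworks: ['jasmine'],
    reporters: ['progress', 'kjhtml']
    // ...the rest of your karma configuration...
  });
};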
1) In your karma.conf.js get the params from the terminal:
var files = (process.env.npm_config_single_file) ? process.env.npm_config_single_file : 'test/test_index.js';
2) In order to run a single test, you will need to set an option object with all your configuration (without files and preprocessors):
var option = {
  webpack: {
    // webpack configuration
  },
  // more configuration......
};
3) Set your files path and preprocessors:
option.files = [
  {pattern: files, watched: false}
];

option.preprocessors = {};
option.preprocessors[files] = ['webpack', 'sourcemap'];

// call config.set function
config.set(option);
4) Run in the terminal:
npm test --single_file=**/my-specific-file-spec.js
For more information check this PR:
https://github.com/webpack/karma-webpack/pull/178
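Putting steps 1) through 3) together, the whole karma.conf.js ends up looking roughly like this sketch (webpack options and the rest of the Karma configuration elided, exactly as in the steps above):

// karma.conf.js -- the fragments from steps 1)-3) assembled into one sketch
var files = process.env.npm_config_single_file
  ? process.env.npm_config_single_file
  : 'test/test_index.js';

module.exports = function(config) {
  var option = {
    webpack: {
      // webpack configuration
    }
    // more configuration......
  };

  option.files = [
    // 'watched' is Karma's option name for disabling its own file watching
    {pattern: files, watched: false}
  ];

  option.preprocessors = {};
  option.preprocessors[files] = ['webpack', 'sourcemap'];

  config.set(option);
};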
There are different ways to do it.
1. Use the --grep option. The disadvantage of this is that all the tests are preprocessed before the specific test suite runs.
2. Use the .only method. Same disadvantage as no. 1. Using both methods no. 1 and no. 2, my node process used to crash often, saying it was out of memory.
3. Limit the files option for preprocessing. This is super fast: limit preprocessing to a certain folder, like the Unit or Integration folder.

For this I have used a custom CLI option, --only, and in the karma config:

const modules = config.only;

and in the files pattern:

files: typeof modules === 'string'
  ? [`tests/**/${modules}/**/*.(test|spec).js`]
  : ['tests/**/*.(test|spec).js'],
Advantage: developers can run only the relevant tests when they make a small change, and it is much faster because the work is limited in the preprocessing phase.
You can also use a combination of no. 3 with no. 1 or no. 2; a consolidated sketch of no. 3 follows.
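A consolidated sketch of option no. 3, following the answer's claim that the custom flag shows up as config.only (if it does not in your Karma version, parsing process.argv as in the minimist answer above works too):

// karma.conf.js -- sketch of option no. 3: run `karma start --only unit`
// to preprocess and run only the tests under tests/**/unit/
module.exports = function(config) {
  const modules = config.only; // set when --only <folder> is passed on the CLI

  config.set({
    // ...the rest of your karma configuration...
    files: typeof modules === 'string'
      ? [`tests/**/${modules}/**/*.(test|spec).js`]
      : ['tests/**/*.(test|spec).js']
  });
};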
I always thought that imperative and declarative usage of xfail/skip in py.test should work in the same way. In the meantime I've noticed that if I write a test that contains an imperative xfail, the result of the test will always be "xfail", even if the test passes.
Here's some code:
import pytest

def test_should_fail():
    pytest.xfail("reason")

@pytest.mark.xfail(reason="reason")
def test_should_fail_2():
    assert 1
Running these tests will always result in:
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.5 -- C:\Python27\python.exe
collecting ... collected 2 items
test_xfail.py:3: test_should_fail xfail
test_xfail.py:6: test_should_fail_2 XPASS
===================== 1 xfailed, 1 xpassed in 0.02 seconds =====================
If I understand correctly what is written in the user manual, both tests should be "XPASS"ed.
Is this a bug in py.test or am I getting something wrong?
When you use the pytest.xfail() helper function, you are effectively raising an exception in the test function. Only when you use the marker is it possible for py.test to execute the test fully and give you an XPASS.