I'm very green to Ember testing, but have found a lot of useful documentation for it online so far (thank you people!). One of the issues I'm hitting here, though, is that I cannot get a test to fail. Strange, I know. For example, I have the following:
import {
  module,
  test
} from 'qunit';

module("Example tests");

test("This is an example test", function(assert) {
  assert.equal(1, 1, "Ember knows 1 is equal to 1");
});

test("This is another example test", function(assert) {
  assert.notEqual(1, 2, "Ember knows 1 is not equal to 2");
});

test("This is a 3rd example test", function(assert) {
  assert.equal(1, 2, "Luke, you're an idiot");
});
However, if I run the ember-cli command ember test, it says everything passes:
$ ember test
Future versions of Ember CLI will not support v0.10.38. Please update to Node 0.12 or io.js.
version: 0.2.2
A new version of ember-cli is available (0.2.3). To install it, type ember update.
Could not find watchman, falling back to NodeWatcher for file system events.
Visit http://www.ember-cli.com/#watchman for more info.
Built project successfully. Stored in "/Users/luke/Examples/iris/tmp/class-tests_dist-DYvAvX3c.tmp".
ok 1 PhantomJS 1.9 - JSHint - .: app.js should pass jshint
ok 2 PhantomJS 1.9 - JSHint - helpers: helpers/resolver.js should pass jshint
ok 3 PhantomJS 1.9 - JSHint - helpers: helpers/start-app.js should pass jshint
ok 4 PhantomJS 1.9 - JSHint - .: router.js should pass jshint
ok 5 PhantomJS 1.9 - JSHint - .: test-helper.js should pass jshint
ok 6 PhantomJS 1.9 - JSHint - unit: unit/ExampleTest.js should pass jshint
1..6
# tests 6
# pass 6
# fail 0
# ok
What am I doing wrong here???
When in doubt, look to the docs: test filenames need to end in -test.js in order to run. Your test file is unit/ExampleTest.js (see the JSHint lines above), so the test runner never picks it up; only JSHint does.
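For example, moving the assertions into a file whose name follows the convention makes the third test fail as expected. This is a sketch; the tests/unit path mirrors the JSHint output above, so adjust it to your layout:

// tests/unit/example-test.js -- note the required -test.js suffix
import {
  module,
  test
} from 'qunit';

module("Example tests");

test("This is a 3rd example test", function(assert) {
  // With the file renamed, this assertion actually runs and fails.
  assert.equal(1, 2, "Luke, you're an idiot");
});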
If I have a scenario with 1000+ tests and want to run only a selected portion of them, I can use fdescribe.
The rest of the tests are skipped, which is great; however, they still pollute the console output. How can I suppress the console output for skipped tests?
If you're running tests via Karma, there is a spec reporter plugin that you can configure to ignore various things.
https://www.npmjs.com/package/karma-spec-reporter
https://www.npmjs.com/package/karma-spec-reporter-2
Add the following to your karma.conf.js:
...
config.set({
  ...
  reporters: ["spec"],
  specReporter: {
    suppressSkipped: true, // do not print information about skipped tests
  },
  plugins: ["karma-spec-reporter"],
  ...
});
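For reference, a complete minimal config might look like the sketch below; the jasmine framework, the file glob, and the Chrome launcher are assumptions for illustration, not part of the original answer:

// Minimal hypothetical karma.conf.js using karma-spec-reporter.
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],      // assumed test framework
    files: ['test/**/*.spec.js'], // assumed test file glob
    reporters: ['spec'],
    specReporter: {
      suppressSkipped: true       // skipped specs print nothing
    },
    plugins: [
      'karma-jasmine',
      'karma-chrome-launcher',
      'karma-spec-reporter'
    ],
    browsers: ['Chrome']
  });
};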
If you're not using Karma, then you need to find the proper Jasmine reporter and configure it, or create your own reporter.
https://www.npmjs.com/package/jasmine2-reporter
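If you do write your own, a minimal sketch could look like this (assuming a recent Jasmine; the exact status names vary slightly across versions):

// Hypothetical custom Jasmine reporter that stays silent for
// skipped/pending/excluded specs and logs only passes and failures.
var quietReporter = {
  specDone: function (result) {
    if (result.status !== 'passed' && result.status !== 'failed') {
      return; // skipped specs produce no output
    }
    console.log(result.status.toUpperCase() + ': ' + result.fullName);
  }
};

jasmine.getEnv().addReporter(quietReporter);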
If you're using the mocha reporter:
reporters: ['mocha'],
mochaReporter: {
  ignoreSkipped: true,
},
I'm having an error with GruntJS when I try to run grunt watch. It works fine until there is a change in the file it is watching; then it looks something like this:
Running "watch" task
Waiting...[1] 2464 bus error grunt watch
The number 2464 changes; it seems to be the port that grunt is watching on, but I'm not sure. Here is my Gruntfile:
module.exports = function (grunt) {
  "use strict";

  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    sass: {
      dist: {
        options: {
          style: 'compact'
        },
        files: {
          'css/style.css': 'sass/style.scss'
        }
      }
    },
    watch: {
      files: 'sass/style.scss',
      tasks: ['sass']
    }
  });

  grunt.loadNpmTasks('grunt-contrib-sass');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('default', ['sass']);
};
Thanks in advance for all the help!
Do you use OS X Mavericks?
Check out this issue: https://github.com/gruntjs/grunt-contrib-watch/issues/204
You need to upgrade Node.js to version 0.10.22:
# Check your version of Node.js. v0.10.20 will still be broken
node -v

# Clean your npm cache
sudo npm cache clean -f

# Install the node binary manager 'n'
sudo npm install -g n

# Use said 'n' manager to install the latest stable version
sudo n stable
source: http://stephentvedt.com/blog/2013/11/16/grunt-js-watch-bus-error/
Invalid syntax in Sass files can also cause grunt or gulp to exit with a bus error. If you've already updated Node and reinstalled your modules without success, try running sass --watch <sass glob> and see if there are any errors (import loops can safely be ignored as the cause).
I work on a Play 2.1.2 project, using Angular.js, CoffeeScript, require.js and bower to organize the front end.
With bower, I use a shim in my /app/assets/javascripts/main.coffee file.
Then I deploy using play clean stage and run target/start.
The problem is: during the stage phase, Play doesn't uglify resources.
In Build.scala:
val main = play.Project(appName, appVersion, appDependencies).settings(
  requireJs += "main",
  requireJsShim += "main.js"
)
Then, after uglifying the CSS during stage:
Tracing dependencies for: main
Error: Load timeout for modules: angular-bootstrap,angular
http://requirejs.org/docs/errors.html#timeout
In module tree:
main
jquery
Error: Load timeout for modules: angular-bootstrap,angular
http://requirejs.org/docs/errors.html#timeout
In module tree:
main
jquery
[info] RequireJS optimization finished.
So nothing was uglified. In main.coffee:
require.config
  paths:
    jquery: "lib/jquery/jquery"
    angular: "lib/angular/angular"
    ...
  shim:
    angular: {deps: ["jquery"], exports: "angular"}
    ...

define [
  "angular-bootstrap"
  "angular"
  ...
], ->
  app = angular.module "app"
  ...
  app
It works perfectly on the client side; all paths are correct and so on.
requireJsShim += "main.js" also looks correct: the require.js optimization takes place after the assets are compiled, so main.coffee (or just main) doesn't work.
Any ideas what the root of the problem is? Has anyone faced it before?
I have an example application using the shim, and I just answered a very similar question. In a nutshell, the shim overwrites the app.build.js file.
What finally solved my problem is creating custom shim.coffee with part of require.config in it:
require.config
  paths:
    jquery: "lib/jquery/jquery"
    angular: "lib/angular/angular"
    ...
Without the shim part.
Then I had to explicitly define the shimmed dependencies in define clauses and use requireJsShim += "shim.js", which is not the same file that I use for client-side configuration.
Then uglifying and require.js optimization began to work!
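In plain JavaScript terms, the technique looks roughly like this sketch (file and module names follow the answer above; the define body is illustrative, not the asker's actual code):

// Build-only configuration (the compiled shim.js): paths only, no shim.
require.config({
  paths: {
    jquery: 'lib/jquery/jquery',
    angular: 'lib/angular/angular'
  }
});

// Each module then names its shimmed dependencies explicitly instead
// of relying on the shim section to pull them in.
define(['jquery', 'angular'], function($, angular) {
  var app = angular.module('app', []); // '[]' creates the module here
  return app;
});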
I've encountered exactly this problem (almost; I'm not using CoffeeScript in my project), and it turned out to be easier to solve than I thought. To restate the issue: certain JavaScript resources, particularly those without an exports setting in their shim, would produce the "Load timeout for modules" error shown above. Worse, the problem appeared to be transient.
Separating the RequireJS configuration (e.g., paths, shim) from the module seemed to help, but compiling remained unreliable and it made working in development mode more complex.
I found that adding waitSeconds: 0 to the configuration object contributed to reliable builds. Why timeouts are even possible for accessing local resources during compilation is beyond me. See the RequireJS API waitSeconds documentation for details.
Here's a snippet from my RequireJS module, located in public/javascripts (your paths will likely differ).
require({
  /* Fixes an unexplained bug where module loads would time out
   * at compilation. */
  waitSeconds: 0,
  paths: {
    'angular': '../vendor/angular/angular',
    'angular-animate': '../vendor/angular/angular-animate',
    /* ... */
    'jquery': '../vendor/jquery/jquery'
  },
  shim: {
    'angular': {
      deps: [ 'jquery' ],
      exports: 'angular'
    },
    'angular-animate': ['angular'],
    /* ... */
    'jquery': {
      exports: 'jQuery'
    }
  },
  optimize: 'uglify2',
  uglify2: {
    warnings: false,
    /* Mangling defeats Angular injection by function argument names. */
    mangle: false
  }
})

define(['jquery', 'angular'], function($, angular) {
  /* Angular bootstrap. */
})
I have grunt set up with mocha. It's running fine, but I'd like to get a more detailed report if a test fails from time to time. Naturally I'd just like to run grunt detailedTest instead of modifying the grunt file every time. I thought that I'd be able to:
- make a new grunt task named detailedTest
- set that task to change the config of the mocha tester
- then run the tests
That looks like:
grunt.initConfig
  watch:
    ...
  mochaTest:
    files: ['test/calc/*.coffee', 'test/*.coffee']
  mochaTestConfig:
    options:
      reporter: 'nyan'
      timeout: 500

grunt.registerTask "spectest", ->
  grunt.config "mochaTestConfig:options:reporter", "spec"
  grunt.log.writeln('done with config: ' +
    grunt.config "mochaTestConfig:options:reporter")
  grunt.task.run('mochaTest')
And the output:
$ grunt spectest
Running "spectest" task
done with config: spec
Running "mochaTest:files" (mochaTest) task
230 _-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_ ...etc
Well damn, that's not a spec reporter. How can I modify a config before a test? Or should I pass the value into grunt from the command line somehow?
Got it 5 minutes later, naturally. The trick is that grunt task targets are addressed with : at the command line (e.g. grunt watch:coffee), but config values are modified via . notation:
grunt.registerTask "spectest", ->
  configPos = "mochaTestConfig.options.reporter"
  grunt.log.writeln('before modif config: ' + grunt.config configPos)  # nyan
  grunt.config configPos, "spec"
  grunt.log.writeln('after modif with config: ' + grunt.config configPos)  # spec
  grunt.task.run('mochaTest')
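To answer the command-line part of the question: a minimal sketch, assuming the same mochaTest/mochaTestConfig layout as above (shown here in JavaScript rather than CoffeeScript), is to read the reporter name from a CLI flag with grunt.option, which is standard Grunt API. The plugin name in the loadNpmTasks call is an assumption:

// Hypothetical Gruntfile excerpt: pick the reporter from a flag, e.g.
//   grunt mochaTest --reporter=spec
// falling back to 'nyan' when the flag is absent.
module.exports = function (grunt) {
  grunt.initConfig({
    mochaTest: {
      files: ['test/calc/*.coffee', 'test/*.coffee']
    },
    mochaTestConfig: {
      options: {
        reporter: grunt.option('reporter') || 'nyan',
        timeout: 500
      }
    }
  });

  grunt.loadNpmTasks('grunt-mocha-test'); // assumed plugin name
};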
When I run make test using the normal test harness that CPAN modules have, it will just output a brief summary (if all went well).
t/000_basic.t .......................... ok
t/001_db_handle.t ...................... ok
t/002_dr_handle.t ...................... ok
t/003_db_can_connect.t ................. ok
... snip ...
All tests successful.
Files=30, Tests=606, 2 wallclock secs
Result: PASS
If I run the tests individually, they output much more detailed information.
1..7
ok 1 - use DBIx::ProcedureCall::PostgreSQL;
ok 2 - simple call to current_time
ok 3 - call to power() with positional parameters
ok 4 - call to power() using the run() interface
ok 5 - call to setseed with a named parameter
ok 6 - call a table function
ok 7 - call a table function and fetch
How can I run all the tests in this verbose mode? Is there something that I can pass to make test?
The ExtUtils::MakeMaker docs explain this in the make test section:
make test TEST_VERBOSE=1
If the distribution uses Module::Build, it's a bit different:
./Build test verbose=1
You can also use the prove command that comes with Test-Harness:
prove -bv
(or prove --blib --verbose if you prefer long options.) This command is a bit different, because it does not build the module first. The --blib option causes it to look for the built-but-uninstalled module created by make or ./Build, but if you forgot to rebuild the module after changing something, it will run the tests against the previously-built copy. If you haven't built the module at all, it will test the installed version of the module instead.
prove also lets you run only a specific test or tests:
prove -bv t/failing.t
You can also use the prove command:
prove --blib --verbose
from the unpacked module's top directory. --blib includes the needed directories for a built but not installed module distribution.