I have a simple monolithic application generated with JHipster v4.10.1, with a front end using Angular 4.x. To run the JavaScript unit tests, I ran the command suggested in the documentation:
./node_modules/karma/bin/karma start src/test/javascript/karma.conf.js --debug
The command runs the tests, reports a coverage summary, and exits, regardless of whether the tests pass or fail. At one point the test run output does show that the debug server has started:
21 11 2017 13:41:20.616:INFO [karma]: Karma v1.7.1 server started at http://0.0.0.0:9876/
But because the command exits, the Karma debug server cannot be accessed. How do I run the tests so that the Karma console can be used in a browser for debugging?
I figured out that the magic flag is actually single-run, which is true by default. So the main command to run for JS debugging is:
yarn test --single-run=false
which in turn runs
$ karma start src/test/javascript/karma.conf.js --single-run=false
With this, the command only exits when explicitly killed, e.g. with Ctrl+C. The Karma debug console can then be accessed at http://localhost:9876/debug.html (assuming the default port is not already in use; if it is, the test output should tell you which port was chosen).
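Equivalently, singleRun can be flipped in the Karma configuration itself so the flag is not needed on every run. A minimal sketch, assuming the JHipster-generated config file; only the relevant option is shown:
// src/test/javascript/karma.conf.js
module.exports = function (config) {
    config.set({
        // keep the Karma server (and its debug page) running after the test run
        singleRun: false
        // ...rest of the generated configuration...
    });
};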
Additionally, you need to disable minimization (and also remove the Istanbul instrumentation config; presumably the instrumenter rewrites the compiled code so breakpoints no longer line up with the original source) so that you can set breakpoints and step through the .ts code in the debugger easily. I found this is done by making the following changes in the webpack/webpack.test.js file:
Remove the following Istanbul config from the module.rules array:
{
    test: /src[/|\\]main[/|\\]webapp[/|\\].+\.ts$/,
    enforce: 'post',
    exclude: /(test|node_modules)/,
    loader: 'sourcemap-istanbul-instrumenter-loader?force-sourcemap=true'
}
Add minimize: false to the LoaderOptionsPlugin in the plugins array:
new LoaderOptionsPlugin({
    minimize: false,
    options: {
        tslint: {
            emitErrors: !WATCH,
            failOnHint: false
        }
    }
})
I have a pytest suite running in this env:
Test session starts (platform: linux, Python 3.6.1, pytest 3.3.1, pytest-sugar 0.9.1)
plugins: flaky-3.5.3, dependency-0.3.2, forked-0.2, logger-0.4.0, sugar-0.9.1, xdist-1.24.1
I have a parametrized test, decorated with flaky, that is supposed to be re-run up to three times if it fails.
@pytest.mark.flaky(max_runs=3)  # re-run this test in case it fails
def test_cucubau(getBauBau_fixture):
    assert cucubau(getBauBau_fixture) == True
However, it fails only once, is not re-run, and my flaky test report is empty:
===Flaky Test Report===
===End Flaky Test Report===
Based on what I read about the flaky plugin, the usage should be trivial, but I'm not able to see what is wrong with my code. Any ideas?
I believe you need the pytest-rerunfailures plugin for that to work. Then you should be able to annotate your test with @pytest.mark.flaky(reruns=3).
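A minimal sketch of that suggestion, assuming pytest-rerunfailures is installed (pip install pytest-rerunfailures); cucubau and getBauBau_fixture are placeholders carried over from the question:
import pytest

# with pytest-rerunfailures, the flaky marker takes reruns, not max_runs
@pytest.mark.flaky(reruns=3)  # re-run up to 3 times on failure
def test_cucubau(getBauBau_fixture):
    assert cucubau(getBauBau_fixture) == True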
I am trying to set up end-to-end Protractor tests in Bitbucket Pipelines with headless Chrome, and I am currently getting this error message:
Failed: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
Any clue about this? Running the tests locally works fine. Can I set a constant session ID?
Thanks
Check your configuration file for this object:
capabilities: {
    "browserName": "chrome",
    "chromeOptions": {
        "args": ["incognito", "--window-size=1920,1080", "disable-extensions", "--no-sandbox", "start-maximized", "--test-type=browser"],
        "prefs": {
            "download": {
                "prompt_for_download": false,
                "directory_upgrade": true,
                "default_directory": path.join(process.cwd(), "__test__reports/downloads")
            }
        }
    }
},
When you find it, make sure you have included the "--no-sandbox" argument in the args property. This flag allows your tests to be run from a remote container. Note that if you include the argument when running tests on your local machine, it has side effects like those described here: Chrome Instances don't close after running Test Case in Protractor.
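For a CI container, a stripped-down capabilities block might look like the following sketch; --headless and --disable-gpu are assumptions for a headless pipeline setup, not part of the original config:
capabilities: {
    browserName: "chrome",
    chromeOptions: {
        // --no-sandbox is the flag that matters when running in a container
        args: ["--headless", "--disable-gpu", "--no-sandbox", "--window-size=1920,1080"]
    }
},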
We have recently upgraded to Angular 5. Since then, my Protractor tests have started failing with "Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL."
All these tests were working fine before.
Protractor version : 5.2.0
karma version: 1.7.0
Highly appreciate your suggestions.
Thanks
This is a Jasmine timeout; see the Protractor guidance on Jasmine timeouts:
Timeouts from Jasmine
Spec Timeout
If a spec (an 'it' block) takes longer than the Jasmine timeout for any reason, it will fail.
Looks like: a failure in your test results - timeout: timed out after 30000 msec waiting for spec to complete
Default timeout: 30 seconds
How to change: To change for all specs, add jasmineNodeOpts: {defaultTimeoutInterval: timeout_in_millis} to your Protractor configuration file. To change for one individual spec, pass a third parameter to it: it(description, testFn, timeout_in_millis).
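For example, a sketch of the relevant part of a Protractor config; the 60000 ms value is an arbitrary illustration:
// protractor.conf.js
exports.config = {
    // ...existing configuration...
    jasmineNodeOpts: {
        // raise the Jasmine spec timeout for all specs (default is 30000 ms)
        defaultTimeoutInterval: 60000
    }
};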
Try debugging your test (instructions here). Following any change, including an upgrade, it's possible your test is broken, resulting in it hanging beyond the duration of the default Jasmine timeout.
A lazy option would be to increase your Jasmine timeout excessively, to see if your test fails with a different exception.
I am trying to set some Appium desired capabilities in the terminal window so that I can, for example, run my tests against different simulator devices:
Terminal: $ appium --device-name 'iPhone 6'
However, I also have to set up desired capabilities in my actual code so that I have a valid instance of IOSDriver. I use this code:
capabilities.setCapability("platformName", "iOS");
capabilities.setCapability("platformVersion", "8.3");
capabilities.setCapability("app","../Build/Products/Debug-iphonesimulator/LightAlarm.app");
driver = new IOSDriver(new URL("http://0.0.0.0:4723/wd/hub"),capabilities);
When I run my tests I get an error that deviceName is not being set:
The following desired capabilities are required, but were not provided: deviceName
However, my terminal Appium server is set up correctly:
info: Welcome to Appium v1.4.0 (REV dc30dae9e8fe8c85eeea707dbdbd60350fdff55b)
info: Appium REST http interface listener started on 0.0.0.0:4723
info: [debug] Non-default server args: {"deviceName":"iPhone 6"}
info: Console LogLevel: debug
Any ideas what might be going wrong?
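For what it's worth, the error message itself names deviceName as the missing desired capability, so a minimal sketch would be to set it client-side as well; the value here simply mirrors the server argument:
// assumption: deviceName must also be provided in the client-side capabilities
capabilities.setCapability("deviceName", "iPhone 6");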
I am writing an integration test that needs to start up several applications. One of these applications is a Play app, set up as an SBT project called appA.
I am able to start the app on the right port using scala.sys.process as follows:
import scala.sys.process._
import org.scalatest._

class Main extends FeatureSpec with Matchers {
  val app = Seq("sbt", "project appA", "run 7777").run
  println(app.exitValue)
}
The spawned application, however, exits immediately with return value 0. No errors are displayed on the console. I just see:
[info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:3000
(Server started, use Ctrl+D to stop and go back to the console...)
[success] Total time: 1 s, completed Feb 27, 2014 10:26:56 PM
0
The 0 at the end of the output is from calling exitValue on the created process. exitValue blocks until the spawned process exits.
How can I run the Play application without it exiting immediately? Is there a better way to start the application?
SBT has two run modes: interactive and batch. If you run it without any arguments, it enters interactive mode and does not exit. When you run it by passing commands, it runs in batch mode and exits when the last command completes. It does not matter whether your application inside SBT runs in a forked JVM or not.
Thus to "fix" it you can apply this hack: add ~ command to the end of the list of sbt commands/args:
val app = Seq("sbt", "project appA", "run 7777", "~").run
~ is used to watch source code for changes and recompile when they occur. Thus SBT will never exit unless stopped by the user or killed.
A cleaner way would be to run the Play application in a Jetty container (assuming you have a WAR to run) or similar, by calling a main class that starts up Jetty with a command like java com.example.MyMain, but that requires additional setup.