I use Karma to run unit tests with the Jasmine framework and PhantomJS. The problem is that PhantomJS does not release memory properly, and once it exceeds 1 GB it crashes. It is probably the same as, or very similar to, the case described here: PhantomJs Crashes while running with grunt-karma test cases
Based on https://github.com/ariya/phantomjs/blob/master/src/webpage.cpp I see that there is a void WebPage::clearMemoryCache() method.
Any idea how to trigger clearMemoryCache after every describe block in a test run?
I found that node_modules\karma-phantomjs-launcher\index.js has a self.specSuccess method where I could possibly force PhantomJS to clear memory. However, I can't find the PhantomJS instance there, and even if I could, I don't know how to execute WebPage::clearMemoryCache().
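PhantomJS 2.x exposes clearMemoryCache() on the webpage module, and a page can receive messages from the test context through window.callPhantom / page.onCallback. As a sketch only (the message shape and the launcher-side handler are assumptions, not part of stock karma-phantomjs-launcher), a Jasmine afterAll hook could ask the hosting PhantomJS page to clear its cache:

```javascript
// Hypothetical helper for use in a Jasmine afterAll hook. It assumes a
// patched PhantomJS launcher whose page.onCallback handler invokes
// page.clearMemoryCache() when it receives this message type.
function requestPhantomCacheClear() {
  if (typeof window !== 'undefined' &&
      typeof window.callPhantom === 'function') {
    window.callPhantom({ type: 'clearMemoryCache' });
    return true;  // message was sent to the PhantomJS host
  }
  return false;   // not running inside PhantomJS
}

// In a spec file:
// afterAll(function () { requestPhantomCacheClear(); });
```

The guard means the same spec files still run unchanged in browsers that have no window.callPhantom.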
Related
We are trying to run OPA5 test cases in our pipeline using the Karma runner with a UI5 config, but it doesn't work. We have checked all the test cases, and they run fine when executed in parts or individually. The issue only appears when all the test cases are run together. When we run the tests in headless Chrome we get an "Aw, Snap!" error. Err.1
Also, when we run the test cases in Git Bash we get multiple errors that are never consistent. Some of them look like problems in the test cases themselves, but those same test cases run fine in Chrome or when run individually.
Attaching one of the errors. The errors are never consistent, and when cross-checked outside bash the test cases run fine.
Err
A possible reason could be that we have many test cases (approx. 120).
Any help is much appreciated.
Recently I ran into an issue at work. To summarize: I am currently writing test cases for an internal Flutter project. This had been working fine for a while, but after my summer vacation it started breaking. It took me a while to figure out roughly what was going on, but I then discovered that running integration tests generates a file named generated_main.dart under .dart_tool/flutter_build/. This works just fine for integration tests; however, it causes unit tests to fail because they do not even load.
I managed to identify the relevant line of code in the generated file as line eight: import 'file:///tmp/flutter_tools.WJSDQT/flutter_test_listener.YHXZLS/listener.dart' as entrypoint;. The capitalized strings of random characters are indeed random and apparently refer to a temporary path that only exists for the duration of the test run. Unit tests then throw (though not always reliably): .dart_tool/flutter_build/generated_main.dart:8:8: Error: Error when reading '/tmp/flutter_tools.WJSDQT/flutter_test_listener.YHXZLS/listener.dart': No such file or directory.
Surprisingly, there seem to be no issues after deleting the generated file. For now I have added a print statement to the integration tests telling whoever ran them to delete the generated file, but that is obviously not a good long-term solution. Is there a way to disable this behavior so that testing does not become unnecessarily complicated?
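As a stopgap, the manual deletion can be scripted so nobody has to remember it. A minimal sketch (the function name is made up; the path comes from the error message above):

```shell
# Hypothetical helper: remove the stale generated entrypoint that
# integration tests leave under .dart_tool/flutter_build/ so that a
# subsequent unit-test run does not pick it up.
clean_generated_main() {
  generated=".dart_tool/flutter_build/generated_main.dart"
  [ -f "$generated" ] && rm -f "$generated"
  return 0
}

# Usage (from the project root):
#   clean_generated_main && flutter test
```

This does not answer whether the generation can be disabled, but it at least replaces the print-statement reminder with something automatic.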
My environment is Ubuntu 20.04 LTS with Flutter 2.5.0-5.1.pre on channel beta, using Android Studio 2020.3.1. I am not currently able to test for this behavior on other platforms. The program needs to run on Linux. The test package version is 1.17.10, the newest one compatible with other dependencies.
Thank you in advance for your time.
I have some code that warms up my app's caches that I'd like to run in production or when I start my app with sbt run. However, when I run sbt console, I'd like to skip this code so that I can get to testing on the REPL very quickly without any delays.
Is there a way to detect if my app is being run within sbt console so that I can avoid warming up the caches?
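One hedged alternative to detecting the console itself (the property name app.warmCaches is an assumption): gate the warm-up on a JVM system property that only `sbt run` sets, e.g. in build.sbt with `run / fork := true` and `run / javaOptions += "-Dapp.warmCaches=true"`. The console never sets the property, so it skips the warm-up without any REPL detection:

```scala
// Sketch: warm the caches only when the launching task opted in via a
// system property. `sbt console` does not set it, so the REPL starts fast.
object CacheWarmer {
  def shouldWarmCaches: Boolean =
    sys.props.get("app.warmCaches").contains("true")

  // Runs the given warm-up thunk only when opted in; returns whether it ran.
  def warmIfNeeded(warm: () => Unit): Boolean =
    if (shouldWarmCaches) { warm(); true } else false
}
```

In production you would set the same property (or invert the default so production warms up unless told otherwise); the point is that the decision lives in the launch configuration, not in fragile runtime detection of the REPL.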
I'm new to using Eclipse for Grails (using STS) and I'm trying to figure out an easy way to run the unit tests. I've seen that I can do it by right-clicking Run As > Grails Command (test-app). This works but is slow, and the test output goes to the test-report HTML page with no apparent clickable stack traces.
I can also do Run As > JUnit Test, which appears to be much faster and gives me the traditional JUnit console available in non-Grails tests. When running unit tests, is there a difference between the two? Is the Grails command setting up or doing anything else?
You are performing a full-blown test run with all the bells and whistles on. :)
According to the docs:
test-app: Runs all Grails unit and integration tests and generates reports.
Setting up the container for the integration tests is what makes it more 'expensive'.
You can limit which tests are run by passing 'unit:' as a parameter to indicate that only unit tests should be run (when not using JUnit directly from Eclipse).
In your case you could do:
test-app unit:
or for a specific FooBarTests.groovy file:
test-app unit: FooBar
Optionally, you can add -echoOut or -echoErr to get more verbose output.
Check out the docs for more info and different phases of testing.
I found examples of how to debug your unit tests in Cocoa, for example on the ADC page here.
But I can't get debugging to work for an iPhone app target. I can get the tests up and running during the build, but what I need is to debug the tests for some of the more complex failures.
You might consider moving your tests to GHUnit, where they run in a normal application target, so debugging is straightforward.
This can be done by setting up a separate Executable for the project that uses the otest tool to run the unit tests, after setting a bunch of relevant environment variables for the executable. I have used this method to successfully debug SenTestingKit logic unit tests.
I found the following links helpful:
http://www.grokkingcocoa.com/how_to_debug_iphone_unit_te.html (also contains help to fix common errors encountered setting up the project).
http://cocoawithlove.com/2009/12/sample-iphone-application-with-complete.html (covers both logic tests and application tests)
http://developer.apple.com/mac/library/documentation/Darwin/Reference/ManPages/man1/otest.1.html (man page for the otest Xcode tool)
The NSLog messages show up in Console.app.
This should give you a starting point.
In Xcode 4, you can set breakpoints in your unit tests.
Create a new project with "include unit tests" checked.
Put a breakpoint in the failing unit test.
Press Command-U to test.
If you do Build & Go instead of just Build, you can set breakpoints in your unit tests and debug them traditionally. This is if you are using the Google Toolbox for Mac for iPhone unit testing; I don't know how you are doing it and whether the process is different.