I'm trying to figure out the current status of the GWTTestCase suite/methodology.
I've read a few things that say GWTTestCase is somewhat obsolete. If that's true, what would be the preferred methodology for client-side testing?
Also, although I haven't tried it myself, someone here says that when he tried it, a single test took seconds or even tens of seconds to run. Is this true? (i.e. is it common for a GWTTestCase test to take tens of seconds, or is that more likely a configuration error on our side, etc.)
Do you use any other methodology for GWT client-side testing that has worked well for you?
The problem is that any GWT code has to be compiled to run within a browser. If your code is just Java, you can run it in a typical JUnit or TestNG test, and it will run as instantly as you expect.
But consider that a JUnit test must be compiled to .class files and run in the JVM from the test runner's main() - you don't normally invoke this directly, you just start it from your build tool or IDE. In the same way, your GWT/Java code must be compiled into JavaScript and then run in a browser of some kind.
That compilation is what takes time - for a minimal test, running in only one browser (i.e. one permutation), this is going to take a minimum of about 10 seconds on most machines: compiling even a trivial module, starting a browser, and loading the host page that lets the JVM tell the GWTTestCase which test to run and collect results, stack traces, or timeouts back. Then add in how long the tested part of your project takes to compile, and you should have a good idea of how long that test case will take.
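For a sense of scale, even a minimal GWTTestCase along these lines (the class and module name below are made up) pays the full compile-and-launch cost before its single assertion runs:

import com.google.gwt.junit.client.GWTTestCase;

// Minimal browser-run test: the module named below is compiled to JavaScript
// and loaded in a browser before testSimpleMath() ever executes.
public class SimpleMathTest extends GWTTestCase {

    @Override
    public String getModuleName() {
        // Hypothetical module; point this at your own .gwt.xml module.
        return "com.example.myapp.MyApp";
    }

    public void testSimpleMath() {
        assertEquals(4, 2 + 2);
    }
}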
There are a few measures you can take to minimize the time taken, though 10 seconds is pretty much the bare minimum if you need to run in the browser.
Use test suites - these tell the compiler to go ahead and build a single larger module in which to run all of the tests (see the GWTTestSuite sketch below). Downside: if you do anything clever with your modules, joining them into one might have other ramifications.
Use JVM tests - if you are just testing a presenter, and the presenter is pure Java (with a mock view), then don't mess with running the code in the browser just to test its logic (see the presenter sketch below). If you are concerned about differences, consider whether the purpose of the test is to make sure the compiler works, or to exercise the logic.
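A rough sketch of the test-suite point, using GWTTestSuite so the compiler builds one combined module for all of the browser tests (the test class names here are hypothetical):

import com.google.gwt.junit.tools.GWTTestSuite;

import junit.framework.Test;

// Groups several GWTTestCases into a single compiled module, so the
// compile/launch overhead is paid once instead of once per test class.
public class ClientTestSuite {

    public static Test suite() {
        GWTTestSuite suite = new GWTTestSuite("All client-side tests");
        suite.addTestSuite(SimpleMathTest.class);   // hypothetical test classes
        suite.addTestSuite(LoginWidgetTest.class);
        return suite;
    }
}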
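And a rough sketch of the JVM-test point: a presenter that only depends on a view interface can be tested with an ordinary JUnit test and a mock view, with no GWT compile and no browser. All of the names here, and the use of Mockito, are assumptions for illustration:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

// Runs in milliseconds on the plain JVM.
public class LoginPresenterTest {

    @Test
    public void emptyPasswordShowsAnError() {
        LoginView view = mock(LoginView.class);             // hypothetical view interface
        LoginPresenter presenter = new LoginPresenter(view);

        presenter.onLoginClicked("alice", "");

        verify(view).showError("Password is required");     // hypothetical view method
    }
}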
I am writing some performance profiling (benchmark) test code as suggested here: https://docs.flutter.dev/cookbook/testing/integration/profiling. However, the documentation suggests I should run it like:
flutter drive --driver=test_driver/perf_driver.dart --target=integration_test/scrolling_test.dart --profile
As we know, that can take minutes for a single run even if only one line of code has changed, because it goes through the full Android/iOS compilation. That wastes a lot of time when debugging the correctness of my test code.
On the other hand, Flutter has a powerful hot reload/restart that only takes seconds to pick up code changes. Can we use that to debug benchmark test code more easily?
I would like to share my approach here:
You can run
flutter run integration_test/scrolling_test.dart --no-dds
and that is all - you can hot restart now for free! (The --no-dds flag is needed in my case because it otherwise errors out; in your case it may not be needed.)
But bear in mind some drawbacks:
To allow hot restart, the test has to run in debug mode, but profiling/benchmarking should only be done in profile mode, otherwise the numbers are completely useless. So use this mode only to debug the correctness of your test code, and never trust the numbers it produces.
With this approach, no data is collected and written to the host machine after the tests finish, because there is no driver (e.g. test_driver/perf_driver.dart). But since we should not use the numbers in this mode anyway, that is not a problem.
I have a suite of Webdriver tests in NUnit that I have running repeatedly on a TeamCity server. I'm getting the driver type from a config file that is altered between test runs; this was the most elegant way of doing cross-browser testing within NUnit that I could come up with. Fortunately, TeamCity combines the output of all the test runs quite neatly. Unfortunately, when a test fails more than once, only one failure and its accompanying stack trace is displayed, with an annotation of "2 failures in one build." Since these are actually different tests, being in different browsers, I would like to view the error outputs separately. Is there a way to do this?
For cross-browser testing, you can consider using Grid 2 instead of changing the browser type via the configuration file.
In addition, you'll be able to run your tests in parallel on different browsers,
which should also solve the problem of the failure information from multiple test runs being merged together.
Well, Stack Overflow is such a good site - most of my Google search results point here, really.
I've seen plenty of posts about Selenium Grid 2, both on this site and elsewhere. They all explain that Grid 2 is able to run tests in parallel and how to set up the grid hub and nodes. But no one told me how to actually run tests through Selenium Grid 2; all I got was "set up the hub and nodes, then run your tests, and everything becomes parallel". But how do you trigger the run through Selenium Grid 2?
Then I found an answer myself: trigger the run with another runner, e.g. NUnit. However, NUnit can only run tests serially, not in parallel. I've also tried other runners, but they don't work that well together with Grid 2.
So I started to doubt whether Selenium Grid 2 really can run tests in parallel on its own. If so, how, and what is the whole workflow?
If not, then a third-party tool is needed to trigger the run. What's more, that tool must be able to trigger multiple tests at one time (multi-threaded, something like that?), so that Grid 2 can distribute those tests to its nodes and run them at the same time. Only then could we call it "parallel running".
What third-party tool would be a good choice? NAnt? Jenkins?
I have a long story of coping with Grid 2 these days; the statements above are just part of it. If you can come up with anything, please tell me - it would be really appreciated.
I'm still fairly confident in my English - many thanks to everyone here for the help. Thank you!
Selenium Grid 2 is capable of executing tests in parallel "provided you pass multiple commands simultaneously to the hub". You need to use a separate framework such as NUnit or TestNG to run multiple test cases at the same time. I use TestNG for triggering multiple tests in parallel, and it works absolutely fine without any issues. You can find some help on getting started here
WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capability);
as described here:
http://code.google.com/p/selenium/wiki/Grid2
Tests are passed to a node, which executes them.
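To make that concrete, here is a minimal sketch of a TestNG test pointed at the hub; the class name, browser-choice logic, and page under test are invented, and the parallelism itself comes from the testng.xml suite definition rather than from this class:

import java.net.MalformedURLException;
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class HomePageTest {

    private WebDriver driver;

    // The "browser" value comes from a <parameter> entry in testng.xml.
    @Parameters("browser")
    @BeforeMethod
    public void setUp(String browser) throws MalformedURLException {
        DesiredCapabilities capability = "chrome".equalsIgnoreCase(browser)
                ? DesiredCapabilities.chrome()
                : DesiredCapabilities.firefox();
        // Every session is opened against the hub, which hands it to a matching node.
        driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capability);
    }

    @Test
    public void canOpenHomePage() {
        driver.get("http://example.com/");
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}

With two <test> blocks in testng.xml (one passing browser=firefox, the other browser=chrome) and parallel="tests" thread-count="2" on the <suite> element, both sessions are requested from the hub at the same time and the grid runs them on separate nodes in parallel.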
We often use test suites to ensure we start from a known, working situation.
In this case, we want the whole test suite to run to test all the pages. If some fail, we still want the other tests to run.
The test suite seems to stop the moment it finds an error in one of the tests.
Can it be set up to keep going and run all the tests, irrespective of the results?
Check that you are using Verify and not Assert - Assert will STOP the run (see the sketch below).
Make each test case restart from a fixed point (like loading the base URL).
Set your timeout (how long you want the system to wait before moving on).
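To illustrate the Verify-versus-Assert point, here is a rough JUnit/WebDriver sketch (the page, element IDs, and browser setup are invented). An assert aborts the test at the first failure, while a verify-style check records the failure and carries on, so the remaining steps still run and all failures are reported together at the end:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CheckoutPageTest {

    private final WebDriver driver = new FirefoxDriver();            // assumed browser setup
    private final StringBuilder verificationErrors = new StringBuilder();

    @Test
    public void checkoutPageShowsAllSections() {
        driver.get("http://example.com/checkout");

        // Assert-style: if this fails, the rest of this test is skipped.
        assertEquals("Checkout", driver.getTitle());

        // Verify-style: failures are recorded, but the test keeps going.
        verifyDisplayed("billing");
        verifyDisplayed("shipping");
        verifyDisplayed("payment");
    }

    private void verifyDisplayed(String elementId) {
        try {
            assertTrue(driver.findElement(By.id(elementId)).isDisplayed());
        } catch (AssertionError | NoSuchElementException e) {
            verificationErrors.append(elementId).append(": ").append(e).append('\n');
        }
    }

    @After
    public void tearDown() {
        driver.quit();
        // Report every recorded verification failure at once, at the end.
        assertEquals("", verificationErrors.toString());
    }
}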
On numerous unrelated projects, NUnit's CPU usage has often ended up at about 50% even when I'm not running my tests. From other information I've read, this is supposedly more to do with my code than with NUnit.
Does anyone know how I can isolate the problems in my code that may be causing this and fix them?
Thanks
I have the same problem, and it seems to rather consistently affect only one test project that does integration testing (calling web services, checking things over HTTP, etc.). I'm very careful to dispose of networked objects (with using(...){ }), so I don't quite understand why NUnit should continue to use 90% CPU days after the test has finished, when all objects used by the test should have been disposed.
The really strange thing is that while running the test, NUnit uses no more than 10%-50% CPU. It's only after the test has completed that CPU usage surges and stays constantly at 80%-100% forever. Reloading or closing the test project (File > Close) doesn't help either; NUnit itself needs to be closed.