I don't want to pollute/mix the unit tests (written using QuickSpec) in my UNIT TESTING TARGET with UI tests from EarlGrey.
Is there any way to add EarlGrey tests to a UI TESTING TARGET instead?
EarlGrey 2 (EG2) is currently in the process of being released; it allows you to add EarlGrey tests to a UI Testing target. Please stay tuned on the Slack channel for more information.
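For reference, a minimal sketch of what an EarlGrey test inside a UI testing target can look like with EG2's Swift API; the "clickMe" accessibility identifier is a placeholder:

    import XCTest

    class ExampleEarlGreyUITests: XCTestCase {
        func testTapsButton() {
            // Select an element in the host app by accessibility ID and tap it.
            // "clickMe" is a placeholder identifier for illustration.
            EarlGrey.selectElement(with: grey_accessibilityID("clickMe"))
                .perform(grey_tap())
        }
    }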
When running Flutter integration tests, the test runner only renders frames when necessary. This makes sense, since the tests should run as fast as possible; however, it makes test failures very difficult to debug because it's hard to see what's going on. Is there a way to manually force the integration test runner to render all frames, as though the app were being used normally?
Is this just one of those things that enterprise developers need to get used to?
Is there some advantage to the application spinning up every time you run your unit tests?
Am I doing something so completely wrong that no one understands this question? Googling doesn't seem to turn up anyone else moaning about it.
Jon Reid wrote a good blog post about not running the application when running unit tests: http://qualitycoding.org/app-delegate-for-tests/.
The idea is basically to replace the App Delegate in main.swift with an "empty" App Delegate when you run the test target, so no application logic runs during the tests.
I have a suite of WebDriver tests in NUnit that I have running repeatedly on a TeamCity server. I'm getting the driver type from a config file that is altered between test runs; this was the most elegant way of doing cross-browser testing within NUnit that I could come up with. Fortunately, TeamCity combines the output of all the test runs quite neatly. Unfortunately, when a test fails more than once, only one failure and its accompanying stack trace are displayed, with an annotation of "2 failures in one build." Since these are actually different tests, being run in different browsers, I would like to view the error outputs separately. Is there a way to do this?
For cross-browser testing, you can consider using Selenium Grid 2 instead of changing the browser type via the configuration file.
In addition, you'll be able to run your tests in parallel on different browsers, and each browser's run will report its failures separately, which solves your problem of merged failure output.
I'm running a test to verify that a method was called on an OCMockObject. I'm expecting it to fail, and the Issue Navigator is indeed showing that it fails, but the notification center pops up (before all the tests finish running) saying "Tests Succeeded."
I realize it's most likely a threading issue, because one of my tests that runs each time reads from a massive JSON file, parses it, and then runs assertions to confirm the parsing. I'm betting this is slowing the tests down, but I would think the unit test runner would account for this; apparently it doesn't, and it probably only considers the OCUnit assertions as passing (in reality I'm using a third-party fluent assertion library that doesn't use OCUnit methods).
Actual Question:
Is there a way to make the unit test runner wait until all test methods have finished? I'm also not doing any kind of threading inside the tests or in the code being tested.
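For what it's worth, the modern XCTest equivalent of "waiting" is an expectation. A minimal sketch, where JSONFixtureParser and its parse(completion:) method are hypothetical stand-ins for the slow parsing code described above:

    import XCTest

    class ParsingTests: XCTestCase {
        func testParsingEventuallyFinishes() {
            let done = expectation(description: "parsing finished")

            // JSONFixtureParser and parse(completion:) are hypothetical
            // stand-ins for the asynchronous JSON parsing code.
            let parser = JSONFixtureParser(fixture: "massive.json")
            parser.parse { result in
                XCTAssertNotNil(result)
                done.fulfill()
            }

            // The test method does not finish until the expectation is
            // fulfilled; it fails instead if the timeout elapses.
            waitForExpectations(timeout: 10)
        }
    }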
I'm developing a static library in Obj-C for a CocoaTouch project. I've added unit testing to my Xcode project using the built-in OCUnit framework. I can run tests successfully upon building the project and everything looks good. However, I'm a little confused about something.
Part of what the static library does is connect to a URL and download the resource there. I constructed a test case that invokes the method that creates a connection and asserts that the connection succeeds. However, when my tests run, a connection is never made to my testing web server (where the connection is set to go).
It seems my code is not actually being run when the tests happen?
Also, I have NSLog calls in the unit tests and in the code they exercise, but I never see their output. I'm new to unit testing, so I'm obviously not fully grasping what's going on here. Can anyone help me out?
P.S. By the way, these are "Logical Tests" as Apple calls them, so they are not linked against the library; instead, the implementation files are included in the testing target.
Code-wise, how are you downloading your data? Most URL connection methods are asynchronous: you are notified that data is available via a callback delivered on the run loop. You very likely are not running the run loop.
If this is the problem, read up on run loops. A sketch of one way to pump the run loop in a test follows.
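A minimal sketch, assuming XCTest and a hypothetical startDownload(completion:) wrapper around the library's connection code, of pumping the run loop until the asynchronous work finishes:

    import Foundation
    import XCTest

    class DownloadTests: XCTestCase {
        func testDownloadCompletes() {
            var downloadCompleted = false

            // startDownload(completion:) is a hypothetical wrapper around
            // the library's asynchronous connection code; the callback
            // fires once the response has been received.
            startDownload { downloadCompleted = true }

            // Pump the run loop so the connection's delegate callbacks can
            // actually be delivered, giving up after ten seconds.
            let deadline = Date(timeIntervalSinceNow: 10)
            while !downloadCompleted && Date() < deadline {
                _ = RunLoop.current.run(mode: .default,
                                        before: Date(timeIntervalSinceNow: 0.1))
            }

            XCTAssertTrue(downloadCompleted, "Connection never completed")
        }
    }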