Which is the best approach for testing Flutter apps?

I'm working on a Flutter app that relies on an API. We are thinking about a testing strategy and would like to know what the best approach would be.
According to their documentation (https://flutter.dev/docs/testing), Flutter has three levels of tests, with two approaches at the integration level:
Unit tests
Widget tests
Integration tests (Pump widgets new approach)
Integration tests (Flutter driver old approach)
As we have limited resources, we would like to know what we should pick up first, since until now very little effort has been put into testing.
Our situation is the following:
Unit tests (50% coverage)
Widget tests (0% coverage)
Integration tests (Pump widgets new approach - 0% Coverage)
Integration tests (Flutter driver old approach - Only a few test scenarios covered, the main flows)
API Tests: 0% coverage on unit tests and functional tests
And we are not using any test automation framework such as WebdriverIO + Appium.
We would like to know how much effort we should put into each of the Flutter test categories. Regarding integration tests, would it make sense to have only the new approach (pumping every widget), or would we also need the old Flutter driver approach? Relying only on the pump-widget integration tests doesn't make us feel very confident.
Some options we are considering are:
Strong API coverage (unit tests and functional tests) + strong coverage on Flutter unit tests + a few integration tests using the Flutter driver approach
Testing pyramid approach: lots of unit tests + a smaller number of integration tests using the new pump-widget approach, API tests, and widget tests + a small number of E2E tests (maybe using Flutter driver integration tests or an external automation framework) and manual tests
Just unit tests + widget tests + integration tests using the new pump-widget approach, trying to achieve 100% coverage in each of the three.
We also think that maintaining integration tests the new way (pumping widgets) is quite time-consuming, as it requires a good understanding of the views and the internals of the app, which might be challenging for a QA automation engineer without much Flutter development experience.
Which of the Flutter automated testing categories should I cover first: unit, widget, or integration testing? Should I use an external automation framework such as WebdriverIO + Appium instead?

First, at this point I would suggest thinking about testing from the application's perspective, not from a Flutter, React Native, or native perspective. The test pyramid and testing concepts are not really tied to any development tool/framework; at the end of the day the app must do what it's supposed to do, gracefully.
Now, on the strategy topic: it depends on a lot of variables. I will cover just a few in this answer, otherwise I would be writing an article here.
There is some stuff to think about, even before writing the strategy:
When will we test?
Local build and testing.
Remote build and testing (CI/CD stuff).
Pre merge testing (CI/CD stuff).
Pre production testing (CI/CD stuff).
Production monitoring tests (CI/CD stuff).
Do we have enough resources?
At least one person dedicated to testing and its tasks.
VMs/computers hosted by your company or cloud providers to run the tests in a CI/CD pipeline.
In my previous experience with testing, when you are starting out (with a low amount of coverage), end-to-end tests are the ones that showed the most value. Why?
It's mostly about the user perspective.
It will answer things like "Can the user even log in to our app and perform a core task?" If you cannot answer this before a release, you are in a fragile situation.
Covers how the application's screens and features behave together.
Covers how the application integrates with backend services.
If there are issues with the API, they will most likely be visible in the UI.
Covers whether data is being persisted in a way that makes sense to the user.
It might be "wrong" in the database, but for whoever is using it, it still makes sense.
You don't need five hundred tests to get decent coverage, but keep in mind that this kind of test is costly to maintain.
The problem with starting at the base of the pyramid (the fast, cheap tests) when you have "nothing" is this: you can have 50,000 unit tests and still not be able to answer whether the core works. Why? To answer that, you need to be exposed to a real (or near-real) world, and unit tests don't provide that. You will be limited to answers like "well, in case the input is invalid, it will show a fancy message". But can the user log in?
The base is still important, and the test pyramid is still a really good guide. But my advice for you right now, as you are starting out: try to get meaningful end-to-end cases and make sure they are working, so that the core of the application, at every release, is there, working as expected. It's really good to release with confidence.
At some point the number of end-to-end tests will grow and you will start to feel the cost of maintaining them; then you can start moving things one step down the pyramid. Checks that were made at the e2e level can move to the integration level, the widget level, and so on.
Testing is also iterative and incremental work; it will change as the team matures. Trying to go straight to a near-perfect setup will cause a lot of problematic releases. My overall point is: at first, try to have tests that give meaningful answers.
One more note: starting at the top of the pyramid, which is not tied to any development framework (Flutter, React Native, etc.), also buys you time to get up to speed with Flutter while still contributing to e2e coverage. Using something like Appium (which SDETs/QA folks usually have some familiarity with) could be parallel work.

Related

Citrus vs Mockito

I have recently come across the Citrus framework and was trying to use it.
But I found I can also use Mockito to mock the external service and do some integration testing of my code.
What are the specific advantages of using Citrus over Mockito?
I have recently come across the Lasagne and was trying to eat it. But I found I can use Pizza as well in eating at the dinner to do some food filling of my stomach.
What are the specific advantages of eating Lasagne over Pizza?
Do you get my point here? Your question is way too generic to get a good answer, as you always need to decide based on the use case and what you want to do. Besides that, Mockito and Citrus are not really competitors in my opinion, as they have completely different focuses.
Mockito and Citrus are both test frameworks, right, but they have completely different goals in testing your software.
Mockito is used in unit testing, with deeper insight into the classes and methods involved, so you can create a level of abstraction when testing a piece of your code. Citrus focuses on integration testing with messaging interfaces involved, where your application under test is deployed somewhere in a shippable state and messages really do go over the wire.
At the moment you are doing unit testing, not integration testing. Which is great! You need a lot of unit tests, and Mockito is a really good choice to help you there. At some point you may want to test your software with a focus on integration with other components in a production-like environment, with a completely different focus (non-functional requirements, deployment, configuration, messaging, security, and so on). That is the time to go for integration testing, to make sure that you meet the interfaces of other components and that others can call your services in a well-designed way.
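To make the Mockito side concrete, here is a minimal sketch of a unit test that abstracts away an external dependency (OrderService and PricingClient are invented for the example):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;

    import org.junit.jupiter.api.Test;

    class OrderServiceTest {
        // Hypothetical collaborator that would normally call a remote pricing API.
        interface PricingClient { double discountFor(String customerTier); }

        static class OrderService {
            private final PricingClient pricing;
            OrderService(PricingClient pricing) { this.pricing = pricing; }
            double totalFor(String tier, double basePrice) {
                return basePrice * (1 - pricing.discountFor(tier));
            }
        }

        @Test
        void appliesDiscountFromPricingService() {
            PricingClient pricing = mock(PricingClient.class);   // no network involved
            when(pricing.discountFor("VIP")).thenReturn(0.10);

            assertEquals(90.0, new OrderService(pricing).totalFor("VIP", 100.0), 0.001);
            verify(pricing).discountFor("VIP");   // the interaction is asserted, not the wire
        }
    }

A Citrus test of the same feature would instead run against a deployed application and assert on the actual messages exchanged with the real (or simulated) pricing service.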
Which part of the testing train you need and which part you may want to stress in your project is totally based on the requirements and your software architecture.
Long story short, I would recommend going three steps backwards to clarify what you want/need to test in your application, and then picking the tools that help you reach those goals.

Regarding Android Appium automation tests

Our Android app is very large, and we need a good portion of our code covered via automated testing; we use Appium for this. Most of our Appium tests exercise code that calls endpoints and are therefore time-consuming. In reply to a user asking how to mock endpoints, a post on the Appium forum (https://discuss.appium.io/t/how-to-mock-api-backend-in-native-apps/4183) seems to suggest using Appium for end-to-end tests only.
My question is: how are Appium tests written in industry? By the test pyramid, we should write very few end-to-end tests. So do apps in industry that use Appium have very few such tests? Is it uncommon to mock endpoints when using Appium? If not, how do you mock endpoints with Appium, e.g. using WireMock?
Appium is a great framework for UI automation of mobile apps, but it is definitely not intended to give 100% test coverage of the product (which includes not only the mobile/web clients, but also back-end services, databases, etc.).
Good practice with Appium (same as with WebDriver on the web) is to keep tests short and quick as much as possible, meaning you generate/prepare/remove test data or application state via API calls/database interactions.
Appium does not force you to implement fully end-to-end tests: you can easily start the appropriate Activity via Appium and continue the test from that step, skipping the bunch of previous steps covered in other tests, as sketched below.
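For example, with the Appium Java client (the startActivity API as found in java-client 7.x/8.x; the package and activity names here are hypothetical), a test can jump straight to the screen it cares about:

    import java.net.URL;

    import org.openqa.selenium.remote.DesiredCapabilities;

    import io.appium.java_client.android.Activity;
    import io.appium.java_client.android.AndroidDriver;

    public class CheckoutScreenTest {
        public static void main(String[] args) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");
            caps.setCapability("automationName", "UiAutomator2");
            caps.setCapability("appPackage", "com.example.shop");   // hypothetical app under test
            caps.setCapability("appActivity", ".MainActivity");

            AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
            try {
                // Seed the cart via an API call or database fixture here, then skip the
                // browsing/login steps that other tests already cover:
                driver.startActivity(new Activity("com.example.shop", ".CheckoutActivity"));
                // ... assertions against the checkout screen ...
            } finally {
                driver.quit();
            }
        }
    }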
The common problem is engineers trying to build mobile automation on UI actions only, relying solely on Appium, so tests become time-consuming, flaky, and slow.
There is not much sense in starting your UI test suite if the auth service is down, right? The quick way to catch such errors is to have good API/integration test coverage; and here we are back to the test pyramid ;)
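As for the WireMock part of the question: stubbing endpoints for UI tests is not unusual when you want fast, deterministic checks of client behaviour (keeping a handful of true end-to-end tests against the real backend separately). A minimal sketch, with the port and endpoint invented for the example; the app under test would need to be pointed at the stub's base URL:

    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

    import com.github.tomakehurst.wiremock.WireMockServer;

    public class StubbedBackend {
        public static void main(String[] args) {
            WireMockServer server = new WireMockServer(8089);   // app's base URL points here
            server.start();

            // Canned response for a profile endpoint, so the UI test never waits on
            // a slow or flaky real service.
            server.stubFor(get(urlEqualTo("/api/v1/profile"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"name\":\"test-user\",\"plan\":\"free\"}")));

            // ... run the Appium scenario, then server.stop() ...
        }
    }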

E2E Test Metrics

Apologies if this question is better suited for Programmers Stack Exchange; I've posted it on both sites because I figured it was on the fence.
Question: Are there any valuable metrics on E2E automated tests? (Does it make sense to gather any data around them?)
Context: For example, when I wrote my unit tests, I used a test coverage package that reports the % of classes covered, the # of methods touched, etc.
A few points I've come across:
Test efficiency (speed of specs)
App performance (response/loading time)
Automation Progress (# of tests currently automated VS # of tests able to be automated)
Defect Efficiency (# of defects found during testing VS # of defects found after delivery)
Any ideas? If it matters, I'm using Protractor on an Angular app.
Some things that come to my mind with Protractor:
You can measure code coverage with e2e tests as well, but it requires some tricks to set up; check these:
https://www.npmjs.com/package/protractor-istanbul-plugin
https://www.npmjs.com/package/grunt-protractor-coverage
Be aware that it is not a clean solution: your code could be minified, and the server side is not included. Just keep this in mind.
You could also measure page performance (load speed, JS execution speed, CSS rendering, and other client-side calculations) with something like protractor-perf:
https://github.com/axemclion/protractor-perf
Keep in mind that it requires a lot of preparation to achieve, but it is cool anyway.
About Automation Progress / percent automatable: I don't think you can track this automatically, unless your requirements are very detailed and stored in some system with an API. Then you could link each test case to a specific requirement and track that. I have never actually seen this working, though.
Defect Efficiency is easier to track with JIRA reports.
Yes, it makes sense to collect data on E2E test metrics. Apart from test case execution, performance, and other measurements, it helps us make decisions about the next phase of activities, such as estimating the cost and schedule of future projects.
Understand what kind of improvement is required for the project to succeed.
Decide whether process or technology needs to be modified, etc. Test metrics are very important for measuring the quality of the software.

MVC 5 Unit tests vs integration tests

I'm currently working on an MVC 5 project using Entity Framework 5 (I may switch to 6 soon). I use database-first with MySQL on an existing database (about 40 tables). This project started as a "proof of concept", and now my company has decided to go with the software I'm developing. I am struggling with the testing part.
My first idea was to use mostly integration tests. That way I felt I could test my code and also the underlying database. I created a script that dumps the existing database schema into a "test database" in MySQL. I always start my tests with a clean database with no data, and create/delete a bit of data for each test. The thing is, it takes a fair amount of time to run my tests (and I run them very often).
I am thinking of replacing my integration tests with unit tests in order to speed them up. I would "remove" the test database and only use mocks instead. I have tested a few methods this way and it seems to work great, but I'm wondering:
Do you think mocking my database can "hide" bugs that only occur when my code runs against a real database? Note that I don't want to test Entity Framework (I'm sure the fine people at Microsoft did a great job on that), but can my code run well against mocks and break against MySQL?
Do you think going from integration testing to unit testing is a kind of "downgrade"?
Do you think dropping integration testing and adopting unit testing for speed considerations is OK?
I'm aware that some frameworks run tests against an in-memory database (e.g. the Effort framework), but I don't see the advantage of this vs mocking; what am I missing?
I'm aware that this kind of question is prone to "it depends on your needs" responses, but I'm sure some of you have been through this and can share your knowledge. I'm also aware that in a perfect world I would do both (tests using mocks and tests using the database), but I don't have that kind of time.
As a side question, what tool would you recommend for mocking? I was told that Moq is a good framework, but that it's a little bit slow. What do you think?
Do you think mocking my database can "hide" bugs that only occur when my code runs against a real database? Note that I don't want to test Entity Framework (I'm sure the fine people at Microsoft did a great job on that), but can my code run well against mocks and break against MySQL?
Yes, if you only test your code using mocks, it's very easy to gain false confidence in your code. When you mock the database, what you're doing is saying "I expect these calls to take place". If your code makes those calls, it'll pass the test, but if they're the wrong calls, it won't work in production. At a simple level, if you add/remove a column from your database, the database interaction may need to change, but the process of adding/removing the column is hidden from your tests until you update the mocks.
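To see what that false confidence looks like in practice: the question is about EF and C#, but the failure mode is language-agnostic, so here is a deliberately buggy Java/Mockito sketch (UserDao, UserService, and the schema constraint are invented for the example):

    import static org.mockito.Mockito.*;

    import org.junit.jupiter.api.Test;

    class UserServiceTest {
        // Hypothetical data layer; the real table has a NOT NULL email column.
        interface UserDao { void save(String name, String email); }

        static class UserService {
            private final UserDao dao;
            UserService(UserDao dao) { this.dao = dao; }
            void register(String name) { dao.save(name, null); }   // bug: email never set
        }

        @Test
        void registerPersistsUser() {
            UserDao dao = mock(UserDao.class);
            new UserService(dao).register("Ada");
            // Passes: the expected call happened. Against the real MySQL schema,
            // the same code would fail at runtime on the NOT NULL constraint.
            verify(dao).save("Ada", null);
        }
    }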
Do you think going from integration testing to unit testing is a kind of "downgrade"?
It's not a downgrade, it's different. Unit testing and integration testing have different benefits that in most cases will complement each other.
Do you think dropping integration testing and adopting unit testing for speed considerations is OK?
"OK" is very subjective. I'd say no; however, you don't have to run all of your tests all of the time. Most testing frameworks (if not all) allow you to categorise your tests in some way. This lets you create subsets of tests, so you could, for example, have a "DatabaseIntegration" category for all of your database integration tests, or "EndToEnd" for full end-to-end tests (see the sketch below). My preferred approach is to have separate builds. The usual/continuous build that I run before/after each check-in runs only unit tests; this gives quick feedback and validation that nothing has broken. A less frequent / daily / overnight build, in addition to the unit tests, also runs the slower, repeatable integration tests. I also tend to run the integration tests for areas I've been working on before checking in code, if there's a possibility of the code impacting the integration.
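The question is .NET (where NUnit's [Category] or MSTest's [TestCategory] attributes play this role), but as an illustration of the idea, here is how categorisation looks with JUnit 5 tags:

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class CustomerRepositoryTest {

        @Test
        void mapsRowToCustomer() {
            // Fast unit test: runs in the usual/continuous build on every check-in.
        }

        @Test
        @Tag("DatabaseIntegration")
        void persistsAndReloadsCustomer() {
            // Slow test against the real test database: runs in the nightly build,
            // e.g. via Maven Surefire with -Dgroups=DatabaseIntegration,
            // and excluded from the quick build with -DexcludedGroups=DatabaseIntegration.
        }
    }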
I'm aware that some frameworks run tests against an in-memory database (e.g. the Effort framework), but I don't see the advantage of this vs mocking; what am I missing?
I haven't used them, so this is speculation. I imagine the main benefit is that rather than having to simulate the database interaction with mocks, you instead set up the database and inspect its state afterwards. The tests become less about how you did something and more about what data moved. On the face of it, this could lead to less brittle tests; however, you're effectively writing integration tests against another data provider that you're not going to use in production. Whether it's the right thing to do is, again, very subjective.
I guess the second benefit is that you don't necessarily need to refactor your code to take advantage of the in-memory database. If your code wasn't constructed to support dependency injection, there's a good chance you'll need to do some level of refactoring to support mocking.
I'm also aware that in a perfect world I would do both (tests using mocks and tests using the database), but I don't have that kind of time.
I don't really understand why you feel this is the case. You've already said that you have integration tests that you're planning to replace with unit tests. Unless you need to do major refactoring to support the unit tests, your integration tests should still work. You don't usually need as many integration tests as unit tests, since the unit tests verify the functionality and the integration tests verify the integration, so the overhead of creating them should be relatively small. Using categorisation to determine which tests you run will reduce the time impact of running them.
As a side question, what tool would you recommend for mocking? I was told that Moq is a good framework, but that it's a little bit slow. What do you think?
I've used quite a few different mocking libraries, and for the most part they are all very similar. Some things are easier in one framework than another, but without knowing what you're doing it's hard to say whether you'd notice. If you haven't built your code with dependency injection in mind, you may find it challenging to get your mocks to where they're needed.
Mocking of any kind is generally quite fast: you're usually (unless you're using partial mocks) removing all of the functionality of the class/interface you're mocking, so it's going to perform faster than your normal code. The only performance issues I've heard about are with MS Fakes/shims; sometimes (depending on the complexity of the assembly being faked) it can take a while for the fake assemblies to be created.
The two frameworks I've used that are a bit different are MS Fakes/shims and Typemock. The MS version requires a certain edition of Visual Studio, but lets you generate fake assemblies with shims for certain types of object, which means you don't have to pass your mocks from your test through to where they're used. Typemock is a commercial solution that uses the profiling API to inject code while your tests are running, which means it can reach parts other mocking frameworks can't. Both are particularly useful if you've got a codebase that wasn't written with unit testing in mind, as they can help bridge the gap.

iOS Tests/Specs TDD/BDD and Integration & Acceptance Testing

What are the best technologies to use for behavior-driven development on the iPhone? And what are some open source example projects that demonstrate sound use of these technologies? Here are some options I've found:
Unit Testing
Test::Unit Style
OCUnit/SenTestingKit as explained in iOS Development Guide: Unit Testing Applications & other OCUnit references.
Examples: iPhoneUnitTests, Three20
CATCH
GHUnit
Google Toolbox for Mac: iPhone Unit Testing
RSpec Style
Kiwi (which also comes with mocking & expectations)
Cedar
Jasmine with UI Automation as shown in dexterous' iOS-Acceptance-Testing specs
Acceptance Testing
Selenium Style
UI Automation (works on device)
UI Automation Instruments Guide
UI Automation reference documentation
Tuneup js - cool library for using with UIAutomation.
Capturing User Interface Actions into Automation Scripts
It's possible to use Cucumber (written in JavaScript) to drive UI Automation. This would be a great open-source project. Then, we could write Gherkin to run UI Automation testing. For now, I'll just write Gherkin as comments.
UPDATE: Zucchini Framework seems to blend Cucumber & UI Automation! :)
Old Blog Posts:
Alex Vollmer's UI Automation tutorial
O'Reilly Answers UI Automation tutorial
Adi Saxena's UI Automation tutorial
UISpec with UISpecRunner
UISpec is open source on Google Code.
UISpec has comprehensive documentation.
FoneMonkey
Cucumber Style
Frank and iCuke (based on the Cucumber meets iPhone talk)
The Frank Google Group has much more activity than the iCuke Google Group.
Frank runs on both device and simulator, while iCuke only runs in simulator.
Frank seems to have a more comprehensive set of step definitions than iCuke's step definitions. And, Frank also has a step definition compendium on their wiki.
I proposed that we merge iCuke & Frank (similar to how Merb & Rails merged) since they have the same common goal: Cucumber for iOS.
KIF (Keep It Functional) by Square
Zucchini Framework uses Cucumber syntax for writing tests and uses CoffeeScript for step definitions.
Additions
OCMock for mocking
OCHamcrest and/or Expecta for expectations
Conclusion
Well, obviously, there's no right answer to this question, but here's what I'm choosing to go with currently:
For unit testing, I used to use OCUnit/SenTestingKit in Xcode 4. It's simple & solid. But, I prefer the language of BDD over TDD (Why is RSpec better than Test::Unit?) because our words create our world. So now, I use Kiwi with ARC & Kiwi code completion/autocompletion. I prefer Kiwi over Cedar because it's built on top of OCUnit and comes with RSpec-style matchers & mocks/stubs. UPDATE: I'm now looking into OCMock because, currently, Kiwi doesn't support stubbing toll-free bridged objects.
For acceptance testing, I use UI Automation because it's awesome. It lets you record each test case, making writing tests automatic. Also, Apple develops it, and so it has a promising future. It also works on the device and from Instruments, which allows for other cool features, like showing memory leaks. Unfortunately, with UI Automation, I don't know how to run Objective-C code, but with Frank & iCuke you can. So, I'll just test the lower-level Objective-C stuff with unit tests, or create UIButtons only for the TEST build configuration, which when clicked, will run Objective-C code.
Which solutions do you use?
Related Questions
Is there a BDD solution that presently works well with iOS4 and Xcode4?
SenTestingKit (integrated with XCode) versus GHUnit on XCode 4 for Unit Testing?
Testing asynchronous code on iOS with OCunit
SenTestingKit in Xcode 4: Asynchronous testing?
How does unit testing on the iPhone work?
tl;dr
At Pivotal we wrote Cedar because we use and love RSpec on our Ruby projects. Cedar isn't meant to replace or compete with OCUnit; it's meant to bring the possibility of BDD-style testing to Objective C, just as RSpec pioneered BDD-style testing in Ruby but hasn't eliminated Test::Unit. Choosing one or the other is largely a matter of style preferences.
In some cases we designed Cedar to overcome some shortcomings in the way OCUnit works for us. Specifically, we wanted to be able to use the debugger in tests, to run tests from the command line and in CI builds, and to get useful text output of test results. These things may be more or less useful to you.
Long answer
Deciding between two testing frameworks like Cedar and OCUnit (for example) comes down to two things: preferred style, and ease of use. I'll start with the style, because that's simply a matter of opinion and preference; ease of use tends to be a set of tradeoffs.
Style considerations transcend what technology or language you use. xUnit-style unit testing has been around for far longer than BDD-style testing, but the latter has rapidly gained in popularity, largely due to Rspec.
The primary advantage of xUnit-style testing is its simplicity, and wide adoption (amongst developers who write unit tests); nearly any language you could consider writing code in has an xUnit-style framework available.
BDD-style frameworks tend to have two main differences when compared to xUnit-style: how you structure the test (or specs), and the syntax for writing your assertions. For me, the structural difference is the main differentiator. xUnit tests are one-dimensional, with one setUp method for all tests in a given test class. The classes that we test, however, aren't one-dimensional; we often need to test actions in several different, potentially conflicting, contexts. For example, consider a simple ShoppingCart class, with an addItem: method (for the purposes of this answer I'll use Objective C syntax). The behavior of this method may differ when the cart is empty compared to when the cart contains other items; it may differ if the user has entered a discount code; it may differ if the specified item can't be shipped by the selected shipping method; etc. As these possible conditions intersect with one another you end up with a geometrically increasing number of possible contexts; in xUnit-style testing this often leads to a lot of methods with names like testAddItemWhenCartIsEmptyAndNoDiscountCodeAndShippingMethodApplies. The structure of BDD-style frameworks allows you to organize these conditions individually, which I find makes it easier to make sure I cover all cases, as well as easier to find, change, or add individual conditions. As an example, using Cedar syntax, the method above would look like this:
describe(#"ShoppingCart", ^{
describe(#"addItem:", ^{
describe(#"when the cart is empty", ^{
describe(#"with no discount code", ^{
describe(#"when the shipping method applies to the item", ^{
it(#"should add the item to the cart", ^{
...
});
it(#"should add the full price of the item to the overall price", ^{
...
});
});
describe(#"when the shipping method does not apply to the item", ^{
...
});
});
describe(#"with a discount code", ^{
...
});
});
describe(#"when the cart contains other items, ^{
...
});
});
});
In some cases you'll find contexts that contain the same sets of assertions, which you can DRY up using shared example contexts.
The second main difference between BDD-style frameworks and xUnit-style frameworks, assertion (or "matcher") syntax, simply makes the style of the specs somewhat nicer; some people really like it, others don't.
That leads to the question of ease of use. In this case, each framework has its pros and cons:
OCUnit has been around much longer than Cedar and is integrated directly into Xcode. This means it's simple to create a new test target and, most of the time, getting tests up and running "just works." On the other hand, we found that in some cases, such as running on an iOS device, getting OCUnit tests to work was nigh impossible. Setting up Cedar specs takes more work than OCUnit tests, since you have to get the library and link against it yourself (never a trivial task in Xcode). We're working on making setup easier, and any suggestions are more than welcome.
OCUnit runs tests as part of the build. This means you don't need to run an executable to make your tests run; if any tests fail, your build fails. This makes the process of running tests one step simpler, and test output goes directly into your build output window which makes it easy to see. We chose to have Cedar specs build into an executable which you run separately for a few reasons:
We wanted to be able to use the debugger. You run Cedar specs just like you would run any other executable, so you can use the debugger in the same way.
We wanted easy console logging in tests. You can use NSLog() in OCUnit tests, but the output goes into the build window where you have to unfold the build step in order to read it.
We wanted easy to read test reporting, both on the command line and in Xcode. OCUnit results appear nicely in the build window in Xcode, but building from the command line (or as part of a CI process) results in test output intermingled with lots and lots of other build output. With separate build and run phases Cedar separates the output so the test output is easy to find. The default Cedar test runner copies the standard style of printing "." for each passing spec, "F" for failing specs, etc. Cedar also has the ability to use custom reporter objects, so you can have it output results any way you like, with a little effort.
OCUnit is the official unit testing framework for Objective C and is supported by Apple. Apple has basically limitless resources, so if they want something done, it will get done. And, after all, this is Apple's sandbox we're playing in. The flip side of that coin, however, is that Apple receives on the order of a bajillion support requests and bug reports each day. They're remarkably good about handling them all, but they may not be able to handle issues you report immediately, or at all. Cedar is much newer and less baked than OCUnit, but if you have questions, problems, or suggestions, send a message to the Cedar mailing list (cedar-discuss@googlegroups.com) and we'll do what we can to help you out. Also, feel free to fork the code from GitHub (github.com/pivotal/cedar) and add whatever you think is missing. We make our testing frameworks open source for a reason.
Running OCUnit tests on iOS devices can be difficult. Honestly, I haven't tried this for quite some time, so it may have gotten easier, but the last time I tried I simply couldn't get OCUnit tests for any UIKit functionality to work. When we wrote Cedar we made sure that we could test UIKit-dependent code both on the simulator and on devices.
Finally, we wrote Cedar for unit testing, which means it's not really comparable with projects like UISpec. It's been quite a while since I tried using UISpec, but I understood it to be focused primarily on programmatically driving the UI on an iOS device. We specifically decided not to try to have Cedar support these types of specs, since Apple was (at the time) about to announce UIAutomation.
I'm going to have to toss Frank into the acceptance-testing mix. It's a fairly new addition but has worked excellently for me so far. Also, it is actually being actively worked on, unlike iCuke and the others.
For test-driven development, I like to use GHUnit; it's a breeze to set up, and it works great for debugging too.
Great List!
I found another interesting solution for UI testing iOS applications.
Zucchini Framework
It is based on UIAutomation.
The framework lets you write screen-centric scenarios in a Cucumber-like style.
The scenarios can be executed in the Simulator and on a device from the console (it is CI-friendly).
The assertions are screenshot-based. That sounds inflexible, but it gets you a nice HTML report with highlighted screen comparisons, and you can provide masks that define the regions where you want pixel-exact assertions.
Each screen has to be described in CoffeeScript, and the tool itself is written in Ruby.
It is kind of a polyglot nightmare, but the tool provides a nice abstraction over UIAutomation, and once the screens are described it is manageable even for a QA person.
I would choose iCuke for acceptance tests and Cedar for unit tests. UIAutomation is a step in the right direction for Apple, but the tools need better support for continuous integration; automatically running UIAutomation tests with Instruments is currently not possible, for example.
GHUnit is good for unit tests; for integration tests, I've used UISpec with some success (GitHub fork here: https://github.com/drync/UISpec), but am looking forward to trying iCuke, since it promises to be a lightweight setup, and you can use the Rails testing goodness, like RSpec and Cucumber.
I currently use Specta for RSpec-like setups, along with its partner (as mentioned above) Expecta, which has tons of awesome matching options.
I happen to really like OCDSpec2, but I'm biased: I wrote OCDSpec and contribute to the second.
It's very fast even on iOS, in part because it's built from the ground up rather than being put on top of OCUnit. It has an RSpec/Jasmine syntax as well.
https://github.com/ericmeyer/ocdspec2