Regarding Android Appium automation tests

Our Android app is very large, and we need a good portion of our code to be covered by automated testing; we use Appium for this. Most of our Appium tests exercise code paths that call endpoints and hence are time-consuming. In reply to a user asking how to mock endpoints, a post on the Appium forum (https://discuss.appium.io/t/how-to-mock-api-backend-in-native-apps/4183) seems to suggest using Appium for end-to-end tests only.
My question is: how are Appium tests written in industry? By the definition of the test pyramid, we should write very few end-to-end tests. So do industry apps using Appium have very few such tests? Is it uncommon to try to mock endpoints when using Appium? If not, how does one mock endpoints with Appium, e.g. using WireMock?
Regards,
Paddy

Appium is a great framework for UI automation of mobile apps, but it is definitely not intended to give 100% test coverage of the product (which includes not only mobile/web clients, but also back-end services, databases, etc.).
Good practice with Appium (as with WebDriver on the web) is to write tests that are as short and quick as possible, meaning you generate/prepare/remove test data and application state via API calls or database interactions.
Appium does not force you to implement fully end-to-end tests: you can easily start the appropriate Activity via Appium and continue the test from that step, skipping the bunch of previous steps already covered in another test.
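For instance, with the Appium Python client you can point a session at a specific Activity instead of the app's launcher, so a deep test does not have to click through login and navigation first. A minimal sketch of the capabilities only; the package and activity names are hypothetical:

```python
# W3C-style Appium capabilities: launch straight into the target Activity
# ("com.example.shop" / ".CheckoutActivity" are made-up names).
caps = {
    "platformName": "Android",
    "appium:automationName": "UiAutomator2",
    "appium:appPackage": "com.example.shop",
    "appium:appActivity": ".CheckoutActivity",  # start here, not the launcher
    "appium:noReset": True,  # keep app state (e.g. a logged-in session)
}

# With a running Appium server, the session would be created roughly like:
# from appium import webdriver
# from appium.options.android import UiAutomator2Options
# options = UiAutomator2Options().load_capabilities(caps)
# driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
```

The test then starts directly on the screen under test instead of replaying flows that other tests already cover.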
A common problem arises when engineers try to build mobile test automation on UI actions only, relying solely on Appium; the tests then become time-consuming, flaky and slow.
There is not much sense in starting your UI test suite if the auth service is down, right? And the quickest way to catch those errors is to have good API/integration test coverage; and here we are back to the test pyramid ;)

Related

Which is the best approach for testing Flutter Apps

I'm working on a Flutter app that relies on an API. We are thinking about a testing strategy and would like to know what the best approach would be.
According to the documentation (https://flutter.dev/docs/testing), Flutter has 3 levels of tests:
Unit tests
Widget tests
Integration tests (Pump widgets new approach)
Integration tests (Flutter driver old approach)
As we have limited resources, we would like to know what we should pick up first, since until now very little effort has been put into testing.
Our situation is the following:
Unit tests (50% coverage)
Widget tests (0% coverage)
Integration tests (Pump widgets new approach - 0% Coverage)
Integration tests (Flutter driver old approach - Only a few test scenarios covered, the main flows)
API Tests: 0% coverage on unit tests and functional tests
And we are not using any testing automation framework such as WebdriverIO + Appium.
We would like to know how much effort to put into each of the Flutter test categories. Regarding Flutter integration tests: would it make sense to have only integration tests with the new approach (pumping every widget), or would we also need integration tests the old way (Flutter driver)? Relying only on the pump-widget integration testing approach doesn't make us feel very confident.
Some options we are considering are:
Strong API coverage (unit tests and functional tests) + strong coverage with Flutter unit tests + a few integration tests using the Flutter driver approach
Test pyramid approach: lots of unit tests + a smaller number of integration tests using the new pump-widget approach, API tests and widget tests + a small number of E2E tests (maybe integration tests using the Flutter driver approach, or an external automation framework) and manual tests
Just unit tests + widget tests + integration tests with the new pump-widget approach, trying to achieve 100% coverage in each of the three
We also think that maintaining integration tests the new way (pumping widgets) is quite time-consuming, as you need a good understanding of the views and the internals of the app, which might be challenging for a QA automation engineer who hasn't got much experience with Flutter development.
Which of the Flutter automated testing categories should I cover first: unit, widget or integration testing? And should I use an external automation framework such as WebdriverIO + Appium instead?
First, at this moment, I would suggest thinking about testing from the application's perspective, not from a Flutter, React Native or native perspective; the test pyramid and testing concepts are not really tied to any development tool/framework, and at the end of the day the app must do what it's supposed to do gracefully.
Now, on the strategy topic: it depends on a lot of variables. I will cover just a few of them in this answer, otherwise I would end up writing an article here.
There is some stuff to think about, even before writing the strategy:
When will we test?
Local build and testing.
Remote build and testing (CI/CD stuff).
Pre merge testing (CI/CD stuff).
Pre production testing (CI/CD stuff).
Production monitoring tests (CI/CD stuff).
Do we have enough resources?
At least one person dedicated to testing and its tasks.
VMs/computers hosted by your company or cloud providers to run the tests in a CI/CD pipeline.
In my previous experiences with testing, when you are starting out (a low amount of coverage), end-to-end tests are the ones that showed the most value. Why?
It's mostly about the user perspective.
It will answer things like "Can the user even log in to our app and perform a core task?" If you cannot answer this before a release, well, you are in a fragile situation.
Covers how the application's screens and features behave together.
Covers how the application integrates with backend services.
Well, if there are issues with the API, they will most likely be visible in the UI.
Covers whether data is being persisted in a way that makes sense to the user.
It might be "wrong" in the database, but for whoever is using it, it still makes sense.
You don't need 500 tests to have nice coverage, but this kind of test is still costly to maintain.
The problem with the base of the pyramid (the fast, cheap tests) when you have "nothing" is that you can have 50,000 unit tests and still not know whether the core works. Why? To answer that you need to be exposed to a real, or near-real, environment, and unit tests don't provide that. You will be limited to answers like "well, in case the input is invalid it will show a fancy message". But can the user log in?
The base is still important, and the test pyramid is still a really good guide, but my advice for you right now, as you are starting out: try to get meaningful end-to-end cases and make sure they are working, so that the core of the application, at every release, is there, working as expected. It's really good to be able to release with confidence.
At some point the number of end-to-end tests will increase and you will start to see the cost of maintaining them; then you can start moving things one step down the pyramid: checks that were made at the e2e level can now live at the integration level, the widget level, and so on.
Testing is also iterative and incremental work; it will change as the team matures. Trying to go straight to a near-perfect setup will cause a lot of problematic releases. My overall point is: at first, try to have tests that give meaningful answers.
One more note: starting at the top of the pyramid, which is not supposed to be tied to any development framework (Flutter, React Native, etc.), will also give you time to get up to speed with Flutter while still contributing to e2e coverage; using something like Appium (SDETs/QAs usually have some familiarity with it), for example, could be parallel work.

Intern or Protractor for Angular E2E Testing

Besides being able to leverage Angular locator methods, why would one use the Protractor testing framework instead of the Intern testing framework for Angular end to end testing?
Aside from AngularJS-specific locators like by.model and by.repeater, Protractor knows when the page is completely loaded and when Angular has settled down and is ready; this makes the tests run naturally, and there is usually no need for explicit waits or artificial delays in the testing code. In other words, it always works in sync with Angular:
You no longer need to add waits and sleeps to your test. Protractor
can automatically execute the next step in your test the moment the
webpage finishes pending tasks, so you don’t have to worry about
waiting for your test and webpage to sync.
Besides, Protractor has a very convenient and rich API. It not only wraps WebdriverJS but also extends it, introducing new features on top. For instance, there are several functional-programming functions available on an array of web elements, like map() or reduce(). I also like the way it allows you to work with "repeaters" through rows and columns. Additionally, there is a nice plugin API and a set of built-in plugins, like accessibility or timeline.
As a side bonus, there is a protractor-perf package that uses protractor and browser-perf for performance regression testing. You might even use your existing e2e tests as a base for performance tests wrapping the desired testing code blocks into perfRunner.start() and perfRunner.stop().
The big pro for Protractor is that it solves the asynchronicity problem by binding to AngularJS elements to check when they have finished loading. It also has an easier-to-read syntax (if you come from a Ruby background) and much more practical tutorials.
There is a more detailed comparison between Intern and Protractor in this blog post.

What is the official Sails teams recommendation for running tests?

Is there a library/framework for Sails testing?
I don't know if there are similarities with Rails in this regard, but Rails has a testing framework by default. Does Sails have the same?
I've heard of Jasmine, but I wanted to know what the Sails team recommends.
We don't officially recommend one testing framework over another; in general our only official policy is "testing is good and you should do it!". Any testing framework that works with Node (and especially Express) will be good for testing your Sails app.
That being said, the core Sails tests use Mocha. Examining the code of those core tests, especially certain integration tests, will give you some insight into how to test a Sails app. The biggest difference between the core integration tests and what you might see in a project-level test is that the core tests create a new app on the fly, while for a project you'd just be testing the code you have.
We're also toying around with automatic test generation, although it's safe to say it's in its infancy. Then again, this is open source, so who's to say when a hero might come along and make a valuable contribution!

How to test Eclipse plugins?

What would be the best means/tools for testing an Eclipse plugin? Are there some tools for testing the GUI features of an application created with Eclipse plugins?
You should try to keep GUI tests to a minimum, since they are slow to run and take time to create. If your code is well structured in a model-view-controller pattern, the GUI-specific code should be minimal.
That's the theory in a perfect world, at least. Until we get there, I prefer to use SWTBot.
Eclipse Jubula is a very nice tool for GUI based testing for Eclipse plugins. It provides automated functional GUI testing for various types of applications. It is aimed at teams who want their automated tests to be written by test experts from the user perspective, without requiring any coding effort. Jubula tests incorporate best practices from software development to ensure long-term maintainability of the automated tests.
It works well with Jenkins as your CI build system.
You can connect to a real database of your choice for test-result storage purposes.
You can save screenshots taken by Jubula upon test failures directly in the test report.
I'd suggest you take a look at the RCP Testing Tool. It lets you develop dozens of UI tests per day per engineer, and it does not have stability or incorrect-recording problems. It's designed specifically, and only, to test Eclipse-based apps. It is an official Eclipse project and it's free.
After analyzing the possibilities, I have opted for using Jubula for testing the GUI part:
Eclipse Jubula Project
It's a good tool for creating tests specific to an RCP application. Moreover, it's not code-based, and it allows the creation of tests from a user's point of view.

Automatic simulate user inputs for testing forms?

I usually make PHP forms and "try" to use "good practices" in them.
I'm concerned about the real safety and error-freeness of those forms, and I want to run some tests simulating customer behavior. I do it manually, but I find that it is hard work, especially when the form is large, and I know there are a lot of combinations I can't test, so I usually find bugs in the production phase.
Is there a tool that does this? I've heard about Selenium; has anybody used it in the way I need? Or how can I create my own test tools that simulate user inputs at random?
User input implies: not filling in/checking all the fields, entering invalid data, using different setups (no JavaScript, browser versions, ...), SQL injection, and who knows what else...
You'll need to consider a combination of approaches here: good test case design, data driving those tests with various input combinations, and an automation tool such as Selenium, WebDriver, Telerik's Test Studio (commercial tool I help promote), or some other automation tool.
Design your test cases such that you're focusing on groups of behavior (a successful path case, a case validating invalid input, a case validating protection against SQL injection, etc.). Then you can look to (perhaps) data drive those test cases with sets of inputs and expected results. You can randomize that as needed through your test case code.
Most good functional automation tools support multiple browsers running the same test script, so that's a good help for hitting multi-browser testing.
Above all, start your automation efforts with small steps and focus first on high-value tests. Don't try to automate everything at once, because that costs a lot of time.
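To make the data-driven "random inputs" idea concrete, here is a sketch in Python: a small generator mixes random valid values with known-hostile ones (empty strings, oversized input, markup, SQL-injection-style strings). The Selenium lines and field names at the end are hypothetical; the generator itself runs standalone:

```python
import random
import string

# Values every text field should be exercised with: empty, too long,
# markup, and SQL-injection-style strings (a starting set, not exhaustive).
HOSTILE = ["", " ", "a" * 5000, "<script>alert(1)</script>", "' OR '1'='1"]

def random_text(max_len=40):
    """A random 'normal-looking' user input."""
    length = random.randint(1, max_len)
    return "".join(random.choice(string.ascii_letters + " ") for _ in range(length))

def inputs_for_field(samples=3):
    """Mix of random valid values and known-hostile values for one field."""
    return [random_text() for _ in range(samples)] + list(HOSTILE)

cases = inputs_for_field()

# With Selenium (hypothetical locators), each case would then be typed
# into the form and the response checked, e.g.:
# for value in cases:
#     driver.find_element(By.NAME, "email").send_keys(value)
#     driver.find_element(By.ID, "submit").click()
#     assert "Internal Server Error" not in driver.page_source
```

Keeping the generator separate from the browser code means the same case list can drive tests across the different browser setups mentioned above.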
Selenium is used to automate browsers in exactly the way you described.
It's used for what is called functional testing, where you test the external aspects of an application to ensure that they meet the specifications.
It is most often combined with unit tests that exercise the internal aspects, for example to test that your application is safe against different forms of SQL injection.
Each programming language usually has several different frameworks for writing unit tests.
These are often used together with an approach called test-driven development (TDD), where you write the tests before the application code.
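As a tiny illustration of the unit-test side, here is a sketch using Python's built-in unittest module; the validator and its rules are invented purely for this example:

```python
import unittest

def validate_quantity(raw):
    """Validate a form field that should hold a small positive integer.
    (Hypothetical rules, just to illustrate unit-testing a form check.)"""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return False
    return 1 <= value <= 99

class TestValidateQuantity(unittest.TestCase):
    def test_accepts_valid_numbers(self):
        self.assertTrue(validate_quantity("1"))
        self.assertTrue(validate_quantity("99"))

    def test_rejects_bad_input(self):
        self.assertFalse(validate_quantity(""))
        self.assertFalse(validate_quantity("0"))
        self.assertFalse(validate_quantity("100"))
        self.assertFalse(validate_quantity("1; DROP TABLE users"))

# Run the suite programmatically (instead of unittest.main()).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestValidateQuantity)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a TDD workflow these tests would be written first, fail, and then drive the implementation of validate_quantity; the Selenium tests above would cover the same field from the outside, through the browser.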