Intern or Protractor for Angular E2E Testing

Besides being able to leverage Angular locator methods, why would one use the Protractor testing framework instead of the Intern testing framework for Angular end-to-end testing?

Aside from AngularJS-specific locators like by.model and by.repeater, Protractor knows when the page has completely loaded and when Angular has settled down and is ready. Tests run naturally; there is usually no need for explicit waits or artificial delays in the test code. In other words, it always works in sync with Angular:
You no longer need to add waits and sleeps to your test. Protractor
can automatically execute the next step in your test the moment the
webpage finishes pending tasks, so you don’t have to worry about
waiting for your test and webpage to sync.
Besides, Protractor has a very convenient and rich API. It not only wraps WebDriverJS but also extends it, introducing new features on top. For instance, there are several functional-programming-style methods available on an array of web elements, like map() or reduce(). I also like the way it lets you work with "repeaters" through rows and columns. Additionally, there is a nice plugin API and a set of built-in plugins, like accessibility or timeline.
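For illustration, here is a minimal sketch of that API. The page, model, and repeater names ("user.name", "cart in carts") are hypothetical, but element(by.model(...)), element.all(by.repeater(...)), and .map() are the actual Protractor calls:

    describe('cart page', function () {
        it('collects repeater rows without explicit waits', function () {
            browser.get('/#/carts'); // Protractor waits for Angular to settle first

            // AngularJS-specific locator: an input bound with ng-model="user.name"
            element(by.model('user.name')).sendKeys('Jane');

            // map() over every row of an ng-repeat, reading two bindings per row
            element.all(by.repeater('cart in carts')).map(function (row) {
                return {
                    name: row.element(by.binding('cart.name')).getText(),
                    total: row.element(by.binding('cart.total')).getText()
                };
            }).then(function (rows) {
                expect(rows.length).toBeGreaterThan(0);
            });
        });
    });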
As a side bonus, there is a protractor-perf package that uses Protractor and browser-perf for performance regression testing. You can even reuse your existing e2e tests as a base for performance tests by wrapping the code blocks you want to measure in perfRunner.start() and perfRunner.stop().
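A hedged sketch of how that wrapping looks, based on the package's documented usage; the page interaction, the 'meanFrameTime' metric choice, and the 60 ms budget are illustrative assumptions:

    var PerfRunner = require('protractor-perf');

    describe('search flow performance', function () {
        var perfRunner = new PerfRunner(protractor, browser);

        it('stays below the frame-time budget', function () {
            perfRunner.start(); // begin collecting browser metrics
            element(by.model('query')).sendKeys('laptops'); // hypothetical page interaction
            perfRunner.stop(); // stop collecting

            if (perfRunner.isEnabled) {
                // metric name and threshold are example values, not recommendations
                expect(perfRunner.getStats('meanFrameTime')).toBeLessThan(60);
            }
        });
    });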

The big pro for Protractor is that it solves the asynchronicity problem by binding to AngularJS elements to check when they have finished loading. It also has an easier-to-read syntax (if you come from a Ruby background) and much more practical tutorials.
There is a more detailed comparison between Intern and Protractor in this blog post.

Related

Which is the best approach for testing Flutter Apps

I'm working on a Flutter app that relies on an API. We are thinking about a testing strategy, and we would like to know what the best approach would be.
According to the documentation (https://flutter.dev/docs/testing) there are three levels of tests:
Unit tests
Widget tests
Integration tests (Pump widgets new approach)
Integration tests (Flutter driver old approach)
As we have limited resources, we would like to know what we should pick up first, since until now very little effort has been put into testing.
Our situation is the following:
Unit tests (50% coverage)
Widget tests (0% coverage)
Integration tests (Pump widgets new approach - 0% Coverage)
Integration tests (Flutter driver old approach - Only a few test scenarios covered, the main flows)
API Tests: 0% coverage on unit tests and functional tests
And we are not using any testing automation framework such as WebdriverIO + Appium.
We would like to know how much effort we should put into each of the Flutter test categories. Regarding Flutter integration tests, would it make sense to only have integration tests with the new approach (pumping every widget), or would we also need integration tests the old Flutter driver way? Relying only on the pump-widget integration testing approach doesn't make us feel very confident.
Some options we are considering are:
Strong API coverage (unit tests and functional tests) + strong coverage of Flutter unit tests + a few integration tests using the Flutter driver approach
Testing pyramid approach: lots of unit tests + a smaller number of integration tests using the new pump-widget approach, API tests, and widget tests + a small number of E2E tests (maybe Flutter driver integration tests or an external automation framework) and manual tests
Just unit tests + widget tests + integration tests with the new pump-widget approach, trying to achieve 100% coverage in each of the three.
We also think that maintaining integration tests the new way (pumping widgets) is quite time consuming, as you need a good understanding of the views and the internals of the app, which might be challenging for a QA automation engineer who doesn't have much experience with Flutter development.
Which of the Flutter automated testing categories should I cover first: unit, widget, or integration testing? Should I use an external automation framework such as WebdriverIO + Appium instead?
First, at this moment, I would suggest thinking about testing from the application's perspective, not from a Flutter, React Native, or native perspective. The test pyramid and testing concepts are not really tied to any development tool or framework; at the end of the day the app must do what it's supposed to do gracefully.
Now, on the strategy topic: it depends on a lot of variables. I will cover just a few of them in this answer; otherwise I would end up writing an article here.
There are some things to think about even before writing the strategy:
When will we test?
Local build and testing.
Remote build and testing (CI/CD stuff).
Pre merge testing (CI/CD stuff).
Pre production testing (CI/CD stuff).
Production monitoring tests (CI/CD stuff).
Do we have enough resources?
At least one person dedicated to testing and its tasks.
VMs/computers hosted by your company or cloud providers to run the tests in a CI/CD pipeline.
In my previous experience with testing, when you are starting out (with low coverage), end-to-end tests are the ones that showed the most value. Why?
It's mostly about the user perspective.
It will answer things like "Can the user even log in to our app and perform a core task?" If you cannot answer this before a release, well, you are in a fragile situation.
It covers how the application's screens and features behave together.
It covers how the application integrates with backend services.
Well, if there are issues with the API, they will most likely be visible in the UI.
It covers whether data is being persisted in a way that makes sense to the user.
It might be "wrong" in the database, but to whoever is using it, it still makes sense.
You don't need five hundred tests to get decent coverage, but this kind of test is still costly to maintain.
The problem with starting at the base of the pyramid (the fast and less costly tests) when you have "nothing" is that you can have 50,000 unit tests and still not be able to answer whether the core works. Why? To answer that, you need to be exposed to a real (or near-real) world, and unit tests don't provide that. You will be limited to answering things like: "well, in case the input is invalid, it will show a fancy message". But can the user log in?
The base is still important, and the test pyramid is still a really good guide, but my suggestion for you right now, as you are starting out, is to get meaningful end-to-end cases and make sure they are working, so that the core of the application, at every release, is there, working as expected. It's really good to release with confidence.
At some point the number of end-to-end tests will grow and you will start to see the cost of maintaining them. Then you can start moving things one step down the pyramid: checks that were done at the e2e level can now live at the integration level, the widget level, and so on.
Testing is also iterative and incremental work; it will change as the team matures. Trying to go straight to a near-perfect setup will cause a lot of problematic releases. My overall point is: at first, try to have tests that give meaningful answers.
Another note: starting at the top of the pyramid, which is not supposed to be tied to any development framework (Flutter, React Native, etc.), will also give you time to get up to speed with Flutter while still contributing to e2e coverage. Using something like Appium (SDETs/QAs usually have some familiarity with it), for example, could be parallel work.

Isn't react-testing-library redundant with using a full render?

I have a question about react-testing-library. It seems like this is the go-to testing library if you're doing hooks development, since Enzyme doesn't seem to support hooks at this time, and who knows if it ever will, at least from the shallow-rendering perspective, from what I've read so far. What drives me a little crazy about react-testing-library is that it suggests doing full renders, firing clicks, changes, etc. to test your components. So what if you were to change the functionality of, say, a Button component: are all the tests that use it going to break? Doesn't it seem odd to render and run tests on every child component of a component when you're already testing those children on their own? Are you expected to mock all those components inside a parent component? Doesn't it seem redundant to do clicks and changes if you're already doing that in automation testing, such as with WebDriver?
The idea is that you test 'mission critical' things in end-to-end testing.
These tests rely on lots of features all working together: the entire app running and every single piece of functionality in between working.
Because they rely on so many things and take so long to develop and run, you don't want to test everything with an end-to-end test.
And if it breaks, where did it break? Which piece of functionality is no longer working?
If you change the functionality of a button that was used in an end-to-end test, that test would fail, as it should. But say the end-to-end test fails and your integration/unit tests on the button also fail: then you know straight away where your problem is.
And what if you refactor the button so that it still functions the same but the implementing code is much cleaner? Then you should design your tests so that they still pass, and this is actually where react-testing-library really shines.
You mimic how a user might interact with the component and what you expect the component to do, not what its internal state is, as you might in Enzyme.
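To make that concrete, here is a minimal react-testing-library sketch (the Counter component is made up, and Jest is assumed as the runner): the test clicks the button and asserts on the rendered text, never touching the hook's internal state.

    import React, { useState } from 'react';
    import { render, screen, fireEvent } from '@testing-library/react';

    function Counter() {
        const [count, setCount] = useState(0);
        return <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>;
    }

    test('increments when the user clicks', () => {
        render(<Counter />);
        fireEvent.click(screen.getByRole('button'));
        // getByText throws if the text is absent, so this doubles as the assertion
        expect(screen.getByText('Clicked 1 times')).toBeTruthy();
    });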
I'm not a professional developer, though, but that's my two cents.
You should take a look at the "Testing Trophy" philosophy that Kent C. Dodds talks about: https://testingjavascript.com/
Like Michael mentions in the other answer, if you change the functionality of your Button components, your tests are expected to break. Tests are a direct translation of the business needs, so if the needs change, your existing tests are supposed to break so that new ones can be incorporated.
On your point about doing automation testing instead, where I assume you mean end-to-end testing: that is different from the tests react-testing-library suggests you write. The philosophy asks you to write a good number of integration tests on your parent component, so that you can be sure the way the parent component uses the child component is in harmony. It validates the configuration you applied to the child component that is specific to the behavior of this parent component, hence integration tests.
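As an illustration of that point, a hypothetical sketch (both components and Jest as the runner are assumptions): the parent's integration test drives the child Button through the parent's own UI, so it is the wiring between the two that gets verified.

    import React, { useState } from 'react';
    import { render, screen, fireEvent } from '@testing-library/react';

    function Button({ onClick, children }) {
        return <button onClick={onClick}>{children}</button>;
    }

    function SearchForm({ onSearch }) {
        const [query, setQuery] = useState('');
        return (
            <div>
                <input aria-label="query" onChange={e => setQuery(e.target.value)} />
                <Button onClick={() => onSearch(query)}>Search</Button>
            </div>
        );
    }

    test('the form wires the button to the search callback', () => {
        const onSearch = jest.fn();
        render(<SearchForm onSearch={onSearch} />);
        fireEvent.change(screen.getByLabelText('query'), { target: { value: 'cats' } });
        fireEvent.click(screen.getByText('Search')); // the child, exercised via the parent
        expect(onSearch).toHaveBeenCalledWith('cats');
    });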

Automatically simulate user input for testing forms?

I usually make PHP forms and "try" to use "good practices" in them.
I'm concerned about the real safety and error-freeness of those forms, so I want to run tests simulating customer behavior. I do this manually, but I find it hard work, especially when the form is large, and I know there are many combinations I can't test, so I usually end up finding bugs in production.
Is there a tool that does this? I've heard about Selenium; has anybody used it in the way I need? Or how can I create my own test tools that simulate user input at random?
User input implies: not filling in or checking all the fields, entering invalid data, using different setups (no JavaScript, different browser versions, ...), SQL injection, and who knows what else...
You'll need to consider a combination of approaches here: good test case design, data-driving those tests with various input combinations, and an automation tool such as Selenium/WebDriver, Telerik's Test Studio (a commercial tool I help promote), or some other automation tool.
Design your test cases so that you're focusing on groups of behavior (a successful-path case, a case validating invalid input, a case validating protection against SQL injection, etc.). Then you can (perhaps) data-drive those test cases with sets of inputs and expected results, randomizing as needed in your test case code.
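For example, a hedged sketch of one data-driven case using the JavaScript selenium-webdriver bindings; the URL, field names, and expected messages are all hypothetical:

    const { Builder, By, until } = require('selenium-webdriver');

    // One behavior group (login), driven by a table of inputs and expectations
    const cases = [
        { email: 'jane@example.com', password: 'hunter22', expected: 'Welcome' },
        { email: 'not-an-email', password: 'hunter22', expected: 'Invalid email' },
        { email: "' OR 1=1 --", password: 'x', expected: 'Invalid email' },
    ];

    (async () => {
        const driver = await new Builder().forBrowser('firefox').build();
        try {
            for (const c of cases) {
                await driver.get('https://example.com/login');
                await driver.findElement(By.name('email')).sendKeys(c.email);
                await driver.findElement(By.name('password')).sendKeys(c.password);
                await driver.findElement(By.css('button[type=submit]')).click();
                const msg = await driver.wait(until.elementLocated(By.id('message')), 5000);
                console.assert((await msg.getText()).includes(c.expected), JSON.stringify(c));
            }
        } finally {
            await driver.quit();
        }
    })();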
Most good functional automation tools support running the same test script in multiple browsers, which is a good help for multi-browser testing.
Above all, start your automation efforts with small steps and focus first on high-value tests. Don't spend time trying to automate everything, because that costs a lot of time.
Selenium is used to automate browsers in exactly the way you described.
It's used for what is called functional testing, where you test the external aspects of an application to ensure that they meet the specifications.
It is most often combined with unit tests that exercise the internal aspects, for example to check that your application is safe against different forms of SQL injection.
Each programming language usually has several different frameworks for writing unit tests.
These are often used together with an approach called test-driven development (TDD), where you write the tests before the application code.

How reliable is HtmlUnitDriver?

Obviously, the answer to the question depends on a number of environmental factors.
In general, I'm wondering what people's experiences are with HtmlUnitDriver as a reliable tool that can be "trusted" to navigate a website basically the same way other browsers do.
Of course, I realize "the way other browsers do" is pretty nebulous; naturally every browser will have its quirks. But I am on a project where we have hundreds of acceptance test scenarios (written in JBehave) and using FirefoxDriver and InternetExplorerDriver, running all of them takes over two hours, which is kind of rough from a continuous integration standpoint. So I'm wondering if it's at least feasible that we could switch our acceptance tests over to use HtmlUnitDriver and expect much faster times with mostly the same behavior (and perhaps we could expect a handful of tests to fail using HtmlUnitDriver and specifically run those tests with a browser-based driver).
Our UI uses GWT, which may or may not complicate things (I don't know).
Basically, in others' experience, does HtmlUnitDriver operate about as well as another browser, or is it really only appropriate for very simple HTML websites with minimal JavaScript and should not be used for an enterprise web application?
From my experience with HtmlUnitDriver, I would say that if you don't use it as your baseline browser when writing your tests, converting them to use it becomes a bit of a nightmare. This is especially true for JavaScript-heavy sites.
The main reason for this is the underlying use of HtmlUnit, which by default uses the Rhino JavaScript engine. In the past I've always had to tell HtmlUnitDriver to start HtmlUnit with Firefox's JavaScript emulation. This, for the most part, solved the JavaScript issues I ran into while running tests with HtmlUnitDriver.
One of the biggest issues I faced when using the same test code for each browser was when the UI developers of the site under test had assigned JavaScript events such as onClick() to HTML elements like a <span>.
The reason is that if you used WebDriver's .click() method on a WebElement representing the <span>, HtmlUnit would not do anything (it expects onClick() to be fired on elements such as an <input>).
To get around this I had to trigger the click() event in JavaScript manually. You can do this either with WebDriver's JavascriptExecutor or with a WebDriverBackedSelenium and Selenium's .fireEvent() method.
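The answer above is about the Java bindings; as a rough illustration, the same executor-style workaround looks like this with the JavaScript selenium-webdriver bindings (the element id is made up):

    const { By } = require('selenium-webdriver');

    // element.click() may be a no-op on a <span> under HtmlUnit, so fire the
    // click from JavaScript against the located element instead.
    async function jsClick(driver, locator) {
        const el = await driver.findElement(locator);
        await driver.executeScript('arguments[0].click();', el);
    }

    // usage: await jsClick(driver, By.id('menu-toggle'));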
So if your site uses such events, I'd say switching to HtmlUnitDriver could be a big task.
Despite this, I actually use HtmlUnitDriver for all my tests. However, I went through the pain of discovering all of the above a while back, so I now use HtmlUnitDriver as my baseline browser when writing tests.

GUI Automation testing - Window handle questions

Our company is currently writing a GUI automation testing tool for Compact Framework applications. We initially looked at many tools, but none of them was right for us.
With the tool you can record test cases and group them into test suites. For every test suite an application is generated which launches the application under test and simulates user input.
In general the tool works fine, but since we use window handles to simulate user input, there are many things we can't do. For example, it is impossible for us to get the name of a control (we only get the caption).
Another problem with using window handles is checking for a change. At the moment we simulate a click on a control and, depending on the result, we know whether the application has moved on to the next step.
Is there any other (simpler) way of doing such things (for example the message queue or anything else)?
Interesting problem! I've not done any low-level (think Win32) Windows programming in a while, but here's what I would do.
Use a named pipe and have your application listen on it. Using this named pipe as a communication medium, implement a really simple protocol whereby you can query the application for the name of a control given its HWND, or for other things you find useful. Make sure the protocol is rich enough that sufficient information is exchanged between your application and the test framework. Also make sure that the test framework does not trigger too much "special behavior" in the app, because then you wouldn't really be testing the features, but rather your test framework.
There are probably more elegant and cooler ways to implement this, but this is what I remember off the top of my head, using only simple Win32 API calls.
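A rough sketch of the query side of such a protocol, here written with Node's net module against a Windows named pipe; the pipe name, command format, and reply shape are all invented, and in the scenario above the server side would live inside the application under test (in .NET, not Node):

    const net = require('net');

    const PIPE = '\\\\.\\pipe\\guitest'; // resolves to \\.\pipe\guitest

    // Ask the application under test for the name of the control behind an HWND.
    function queryControlName(hwnd) {
        return new Promise((resolve, reject) => {
            const sock = net.connect(PIPE, () => sock.write('NAME ' + hwnd + '\n'));
            sock.once('data', (buf) => { resolve(buf.toString().trim()); sock.end(); });
            sock.once('error', reject);
        });
    }

    queryControlName(0x1a2b3c).then((name) => console.log('control name:', name));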
Another approach, which we have implemented for our product at work, is to record user events, such as mouse clicks and key events in an event script. This should be rich enough so that you can have the application play it back, artificially injecting those events into the message queue, and have it behave the same way it did when you first recorded the script. You basically simulate the user when you play back the script.
In addition to that, you can record any important state (the user's document, preferences, GUI control hierarchy, etc.), once when you record the script and once when you play it back. This gives you two sets of data you can compare, to make sure, for instance, that everything stays the same. This solution gives you tests that are not easy to modify (you have to re-record if your GUI changes) but that provide awesome regression testing.
(EDIT: This is also a terrific QA tool during beta testing, for instance: just have your users record their actions, and if there's a crash, you have a good chance of easily reproducing the problem by just playing back the script)
Good luck!
Carl
If the automated GUI testing tool has knowledge of the framework the application is written in, it can use that information to produce better or more advanced scripts. TestComplete, for example, knows about Borland's VCL and WinForms, and it has advanced built-in support for applications built with Windows Presentation Foundation.
Use NUnitForms. I've used it with great success for single- and multi-threaded apps, and you don't have to worry about handles and stuff like that.
Here are some posts about NUnitForms worth reading:
NUnitForms and failed DragDrop registration - problem of MTA vs STA
Compiled application exe GUI testing with NUnitForms
I finally found a solution for communicating between the testing application and the application under test: Managed Spy. It's basically a .NET application built on top of ManagedSpyLib.
ManagedSpyLib allows programmatic access to the Windows Forms controls of another process. For this it uses window hooks and memory-mapped files.
Thanks for all who helped me to get to this solution!
Managed Spy does not provide a solution for Compact Framework applications.
The company Jamo Solutions (www.jamosolutions.com) meets the requirements for automation testing on mobile devices, including .NET Compact Framework applications.