Isn't react-testing-library redundant with using a full render? - react-testing-library

I have a question about react-testing-library. It seems like this is the go-to testing library if you're doing hooks development, since Enzyme doesn't seem to support hooks at this time, at least from the shallow-rendering perspective, and who knows if it ever will. What is driving me a little crazy about react-testing-library is that it suggests doing full renders, firing clicks, changes, etc. to test your components. So if you were to change the functionality of, say, a Button component, are all the tests that use it going to break? Doesn't it seem odd to render and run tests on every child component of a parent when you're already testing each child on its own? Are you expected to mock all those components inside a parent component? And doesn't it seem redundant to do clicks and changes if you're already doing that in automation testing, such as with WebDriver?

The idea is that you test 'mission critical' things in end-to-end testing.
These tests rely on lots of features all working together: the entire app running and every single piece of functionality in between working.
Because they rely on so many things and take so long to develop and run, you don't want to test everything with an end-to-end test.
And if one breaks, where did it break? Which piece of functionality is no longer working?
If you change the functionality of a button that was used in an end-to-end test, it would fail, as it should. But say the end-to-end test fails and your integration/unit tests on the button also fail? You know straight away where your problem is.
And what if you refactor the button so that it still functions the same but the code implementing it is much cleaner? Then you should design your tests so that they still pass, and this is actually where react-testing-library really shines.
You mimic how a user might interact with the component and what you expect the component to do, not what its internal state is, like you might in Enzyme.
I'm not a professional developer, but that's my two cents.
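For illustration, here is a minimal sketch of that style of test, assuming Jest, @testing-library/react, and a hypothetical Counter component (none of this comes from the original answers):

```tsx
// Test a hypothetical <Counter /> the way a user would: render it, click, and
// assert on the visible output rather than on internal state.
import React from "react";
import { render, fireEvent } from "@testing-library/react";
import { Counter } from "./Counter"; // assumed component under test

test("clicking the button increments the visible count", () => {
  const { getByRole, getByText } = render(<Counter />);

  // Simulate the user's click on the increment button.
  fireEvent.click(getByRole("button", { name: /increment/i }));

  // getByText throws if the text is not rendered, so this doubles as the assertion.
  expect(getByText(/count: 1/i)).toBeTruthy();
});
```

If the Counter were later refactored internally, say from a class with setState to a hook, a test like this would keep passing as long as the user-visible behaviour stayed the same.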

You must take a look at the "Testing Trophy" philosophy that #kentcdodds talks about. - https://testingjavascript.com/
Like Michael mentions in the other answer, if you change the functionality of your Button components, your tests are expected to break. Tests are a clear translation of the business needs, so if needs change, your existing tests are supposed to break, so that the new ones may be incorporated.
On your point about doing automation testing instead (I'm assuming you mean end-to-end testing): that is different from the tests react-testing-library suggests you write. The philosophy asks you to write a good number of integration tests on your parent component, so that you can be sure the way the parent component uses the child component is in harmony. Those tests validate the configuration you gave the child component, which is specific to the behavior of this parent component; hence the integration tests.
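As a hedged example of such an integration test, suppose a hypothetical SignupForm renders the shared Button and only enables it once the email field is filled in (the component and labels here are invented for illustration):

```tsx
// Integration test of a hypothetical parent component that configures a shared Button.
import React from "react";
import { render, fireEvent } from "@testing-library/react";
import { SignupForm } from "./SignupForm"; // assumed parent component

test("the submit button is enabled only after an email is entered", () => {
  const { getByRole, getByLabelText } = render(<SignupForm />);
  const submit = getByRole("button", { name: /sign up/i }) as HTMLButtonElement;

  // Before any input, the parent should keep the Button disabled.
  expect(submit.disabled).toBe(true);

  // Typing into the field should flow through the parent's state into the Button's props.
  fireEvent.change(getByLabelText(/email/i), { target: { value: "a@b.com" } });

  expect(submit.disabled).toBe(false);
});
```

The test exercises the parent and the child together without caring how either is implemented internally.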


Is there a way to import a common feature file into another feature file in Cucumber

Is there a way to import one Cucumber feature file into another, so that I can move my repeated logic/actions/business validations for different flows into a common feature file?
Note: I am using the Background option effectively for a few things, like launching the application in every feature file. If you consider it, even that background is duplicated. :)
Many thanks.
There is no way to include one feature file in another.
If you could, then Gherkin could be considered to be a programming language. Gherkin isn't a programming language and thus lacks features like functions or modules.
What can you do about your repeated backgrounds then? My approach would probably be to see if I could move the common initialization you do in the background down the stack. I would see if I could implement some helpers that would perform the same steps and then either minimize the background to something like
Given the world is prepared
in a background. Or just make sure that the preparation was done first in the scenarios that needed it. Maybe even hide it so the call is done in the first step. This would essentially move the background away from the feature file and hide it from your business stakeholders.
One thing to consider, though, is whether the background is important for your business stakeholders. Do they care about the backgrounds, or is it just noise for them? If it is important, then don't hide the backgrounds. If the backgrounds aren't important, then hide them as much as possible.
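A rough sketch of what that single background step might delegate to, assuming cucumber-js with TypeScript; launchApplication and seedDefaultData are hypothetical helpers, not anything from the question:

```ts
// steps/world.steps.ts - one coarse-grained step that hides the shared setup.
import { Given } from "@cucumber/cucumber";
import { launchApplication, seedDefaultData } from "../support/helpers"; // assumed helpers

Given("the world is prepared", async function () {
  // The detail lives in ordinary code, where it can be shared across features,
  // instead of being repeated in every Background section.
  await launchApplication();
  await seedDefaultData();
});
```

Each feature file then needs only the one-line background, or the same helpers can be called from a Before hook so the step disappears from the Gherkin entirely.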
You can deal with this by abstraction and background.
Given any number of prerequisites, if you group them together and give them a name, you can call them in one step. In the Cucumber space it's far more effective to do this rather than import.
BDD is all about working at different levels of abstraction. In particular when you create some specific behaviour you work at a detailed level. Once the behaviour is created you then use that behaviour in a more abstract way.
Importing just gives you access to the detail. This is a bad thing because you end up working with different levels of abstraction in the same place which is both dangerous and confusing.

UI First or logic first?

While working on projects, I often come to the dilemma of working on the UI first or on the logic first. Having the UI first gives a nice overview of how the end product is going to look, while having the logic first uncovers any possible roadblocks in the technology.
However, it is not always that crystal clear. Sometimes the UI may need data to be populated to really show what it means, and simulating that data could be more difficult than implementing the logic. What is your preferred approach for development, and why? Which is more efficient and effective?
(I am seeing this issue more and more with iPhone projects.)
Neither.
You will have to do some thinking at the beginning of the project anyway, deciding the general approach you will take and what operations you will support.
Do that reasonably well and you will have defined the interface between the view and the underlying logic. Look at the Model-View-Controller approach for some inspiration.
What you want to have early on is an idea of the basic operations your logic code needs to perform in order to achieve its purpose. Usually it will be a simple function call, but sometimes it may involve more than that. Have that clear first.
Next, a complex system that works is based on a simple system that works.
Which means you will need to have a basic UI you'll use to test a basic logic implementation first.
A simple form with a button which presents a message is basic enough. Then, it can grow, you implement a piece of functionality and then you add a simple UI you can test it with.
It is easier to do both piece by piece, as the logic and the UI for a small piece of functionality are conceptually similar, and it will be easy to keep track of both while you implement and test.
The most important part is to keep the UI and logic decoupled, making them talk through a common interface. This will allow you to make quick edits when needed and improve the looks of the GUI towards the end.
You will be better able to just scrap the UI if you don't like it. All you'll need to do is use the same interface, one which you know how to do because you wrote it and you already implemented it.
If at some point you realize you've made a big mistake you can still salvage parts of the code, again because the UI and logic are decoupled and, hopefully, the logic part is also modular enough.
In short: think first, do both the UI and the logic in small increments and keep things modular.
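As a small illustration of that decoupling, here is a sketch in TypeScript using an invented task-list example; the UI only ever talks to the interface, so either side can be swapped out (all names here are hypothetical):

```ts
// A hypothetical interface that the UI talks to; the logic implements it.
interface TaskService {
  addTask(title: string): void;
  listTasks(): string[];
}

// Logic side: plain code, testable without any UI.
class InMemoryTaskService implements TaskService {
  private tasks: string[] = [];
  addTask(title: string): void {
    this.tasks.push(title);
  }
  listTasks(): string[] {
    return [...this.tasks];
  }
}

// UI side: a throwaway console "view" that only knows about the interface.
// It can be scrapped and replaced by a real GUI without touching the logic.
function renderTaskList(service: TaskService): void {
  service.listTasks().forEach((t, i) => console.log(`${i + 1}. ${t}`));
}

const service = new InMemoryTaskService();
service.addTask("Write the interface first");
renderTaskList(service);
```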
Iterate between both (logic and UI). Back and forth. The UI may change as you understand how and what you can do with the logic and whatever constraints that may present. The logic may change as you change the features, behavior, or performance requirements of an easy-to-use, responsive, and decent looking UI.
I usually do the minimum possible of each until I have some barely working mockups. Then I use each mockup to test where my assumptions about the proper UI and/or logic might be right or wrong. Pick the most flawed, and start iterating.
Apple suggests mocking up the UI on paper first. (Have an eraser handy...)
If possible, in parallel.
But personally, I advocate logic first.
I start with the fundamentals first and that means getting the logic coded and working first. There are two reasons for this:
If I can't get the logic working correctly, having a pretty UI is useless and a waste of my time.
You will most likely change the UI when you work on the logic aspect, making the UI process longer and more expensive.
I usually get my UI in order first. The reason? As I prototype different designs, sometimes my idea for the app changes. If it does, then it's of no consequence - there is no code to rewrite.
However, sometimes it is helpful to get the fundamentals first in order to determine if the app is going to work or not. If it's not going to function, then why waste time making interfaces?
I like to start by laying out the different parts of my project in something like Visio.
I make boxes for the different views I expect to have, and I fill them with the information I expect them to contain.
I make another set of boxes for my expected model objects (logic). I fill them with the information I expect they will work with, and I draw lines between models and views where I think it will be necessary.
I do the same thing for object graphs (if I plan on using CoreData), and for database tables if I am going to have an external database.
Laying everything out visually helps me decide if I am missing any important features or interactions between project components. It also gives me something to quickly refer to if I later lose track of what I was doing.
From that point, I tend to work on a model until it has enough done to fill out part of a view, then I work on a view until it can interact with the model.
I also try to identify views or models that could be reused for multiple purposes so that I can reduce the overall amount of work I have to do.
Take the agile approach and work on small amounts of both in iterations. Slowly building each functional piece of the program so as to not build any monolithic piece at once.

iPhone - UI Unit Testing?

When it comes to writing unit tests for UI, what do you write tests for?
Do you test each method? (e.g. that a method returns the correct data)
Or do you test the functionality? (making sure that the table populates the data it is supposed to)
Do I need to mock everything except the item I am testing? Let's say I am testing to make sure a table view populates correctly; do I mock everything else?
Please provide as much detail as possible.
I'll try to answer this in a general way.
When testing UI-ish code it's often a good idea to target the tests "one step away" from the UI itself, e.g. run against the models instead of the UI itself if possible. It's much less brittle this way. I'm not familiar with iOS UI test automation, but these sorts of things tend to break upon the smallest layout changes, etc.
I suggest you take a look at FoneMonkey by Gorilla Logic. They have a very nice utility for writing unit tests which actually test from the user's perspective, i.e. check that the UI is as it should be: loads correctly, contains the correct values, etc.
You can even run it in a faceless environment, e.g. a Continuous Integration server.

Unit Testing is Wonderful, But

I took the time to set up some unit tests and set up the targets in Xcode, etc., and they're pretty useful for a few classes. However:
I want to test small UI pieces for which I don't want to launch the entire application. There is no concept of pass/fail: I need to "see" the pieces, and I can make dummy instances of all the relevant classes to do this. My question is: how can I set this up in Xcode?
I realize I could use another Xcode project for each class (or group of classes), but that seems a bit cumbersome. Another target for each?
I know that you're looking for an approach to testing UI components that doesn't require a fully functional application, but I've been impressed with what the new UI Automation instrument introduced in iOS 4.0 lets you do.
This instrument lets you use Javascript scripts to interactively test your application's interface, and it does so in a way that does not require checking exact pixel values or positions on a screen. It uses the built-in accessibility hooks present in the system for VoiceOver to identify and interact with components.
Using this instrument, I have been able to script tests that fully exercise my application as a user would interact with it, as well as ones that hammer on particular areas and look for subtle memory buildups.
The documentation on this part of Instruments is a little sparse, but I recently taught a class covering the subject for which the video is available on iTunes U for free (look for the Testing class in the Fall semester). My course notes (in VoodooPad format) cover this as well. I also highly recommend watching the WWDC 2010 video session 306 - "Automating User Interface Testing with Instruments".
Well, you cannot call showing a piece of a GUI testing, even if that GUI is part of a larger application. What you can do here is create a separate executable target and write a small tool that reuses GUI components from your application and shows them to you based on input parameters. This will eliminate the need for many, many different targets.
If you still insist on using unit tests, you can show your GUI for some period of time, for example 10 seconds. The test case will then run until the GUI is closed or the timeout elapses, so each test will take up to N seconds to execute.
This is a good question. I think you actually do not want to use unit tests for those 'visual confirmations'. Personally I usually write little test apps to do this kind of testing or development. I don't like separate targets in the same project so I usually just create a test project next to the original one and then reference those classes and resources using relative paths. Less clutter. And it is really nice to be able to test more complex user interface elements in their own little test environment.
I would take a two-level approach to UI "unit testing":
Although Cocoa/Cocoa Touch are still closer to the Model-View-Controller than the Model-View-ViewModel paradigm, you can gain much of the testability advantage by breaking your "View" into a "view model" and a "presenter" view (note that this is somewhat along the lines of the NSView/NSCell pair; Cocoa engineers had this one a long time ago). If the view is a simple presentation layer, then you can test the behavior of the view by unit testing the "view model".
To test the drawing/rendering of your views, you will have to either do human testing or do rendering/pixel-based tests. Google's Toolbox for Mac has several tools for doing pixel-by-pixel comparison of rendered NSViews, CALayers, UIViews, etc. I've written a tool for the Core Plot project to make dealing with the test failures and merging the reference files back into your unit test bundle a little easier.

How do I test-drive GWT development?

Just googling 'TDD' and 'GWT' easily leads to this article, where the author explains how he can test a GWT application without a container. However, I think his example is not test-driven, as he has all the design first and then writes the tests afterwards, not 'test-first'.
This leads me to think: is it possible to have 'test-first' development on a UI like GWT? Some people say UI code is not suitable for TDD, but I think that by adopting the MVC pattern, maybe we can at least test-drive the M and C parts? (So V is the UI part which cannot be developed test-first.)
What would be the first failing test we would write for the article's example?
Test driving UI is problematic because you often don't know what you want on the screen until you see it on the screen. For that reason, GUI development tends to be massively iterative and therefore very difficult to drive with tests.
This does not mean that we just abandon TDD for GUIs. Rather, we push as much code as we possibly can out of the GUI, leaving behind only simple wiring code. That wiring allows us to make the massively iterative changes we need, without affecting the essence of the problem.
This technique was probably best described by Michael Feathers some years ago in an article entitled "The Humble Dialog Box". It is also the fundamental idea behind the Model-View-Presenter pattern that caused such a stir four years ago; and has now been split into the Passive View and Supervising Controller patterns. The article link in this question takes advantage of these ideas, but in a test-after rather than a test-driven way.
The idea is to test-drive everything except the view. Indeed, we don't even need to write the view for a good long time, and the view is so absurdly simple that it probably doesn't need any kind of unit tests at all. Or if it does, they can be written last.
To test drive the Supervising Controller you simply make sure you understand how the data will be presented on the screen. You don't need to know where the data is, or what the font is, or what color it is, or any of the other cosmetic issues that cause the massive iteration of GUIs. Rather, you know one data item will be some kind of text field. Another will be a menu, still another will be a button or a check box. And then you make sure that the View can ask all the questions it needs to ask to get these items rendered correctly.
For example the text box may have a default value. The View should be able to ask for it. The menu may have some items greyed-out. The View should be able to ask for this information. The questions that the view asks are all about presentation, and are devoid of business rules.
By the same token, the view will tell the Supervising Controller when anything changes. The controller will modify the data appropriately, including any kind of validation and error recovery, and then the View can ask how that data should be presented.
All of this can be test driven because it's all decoupled from the visual display. It's all about how the data is manipulated and presented, and not about what it looks like. So it doesn't need to be massively iterated.
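A hedged sketch of what test-driving such a Supervising Controller could look like, using a hand-rolled fake view and Jest rather than any particular GWT API; every name below is invented for illustration:

```ts
// The view exposes only presentation questions and notifications; no business rules.
interface OrderView {
  setQuantityField(value: string): void;
  setSubmitEnabled(enabled: boolean): void;
}

// The Supervising Controller decides what the view should show.
class OrderController {
  constructor(private view: OrderView) {}

  start(): void {
    this.view.setQuantityField("1"); // default value the view asks for
    this.view.setSubmitEnabled(true);
  }

  quantityChanged(raw: string): void {
    const qty = Number(raw);
    // Validation lives here, not in the view.
    this.view.setSubmitEnabled(Number.isInteger(qty) && qty > 0);
  }
}

// Test-first: a fake view records what it was told; no widgets are involved.
test("submit is disabled when the quantity is not a positive integer", () => {
  const enabledCalls: boolean[] = [];
  const fakeView: OrderView = {
    setQuantityField: () => {},
    setSubmitEnabled: (enabled) => enabledCalls.push(enabled),
  };

  const controller = new OrderController(fakeView);
  controller.start();
  controller.quantityChanged("-3");

  expect(enabledCalls).toEqual([true, false]);
});
```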
I've successfully test-driven the development of Swing and GWT applications through the GUI.
Testing "just behind the GUI" ignores the integration between the model code and the GUI components. The application needs to hook up event handlers to display data in the GUI when the model changes and receive input from the GUI and update the model. Testing that all those event handlers have been hooked up correctly is very tedious if done manually.
The big problem to overcome when testing through the GUI is how to cope with changes to the GUI during development.
GWT has hooks to help with this. You need to set debug IDs on the GWT widgets and import the DebugID module into your application. Your tests can then interact with the application by controlling a web browser, finding elements by their id and clicking on them or entering text into them. Web Driver is a very good API for doing this.
That's only the start, however. You also need to decouple your tests from the structure of the GUI: how the user navigates through the UI to get work done. This is the case whether you test through the GUI or behind the GUI against the controller. If you test against the controller, the controller dictates the way that the user navigates through the application's different views, and so your test is coupled to that navigation structure because it is coupled to the controller.
To address this, our tests control the application through a hierarchy of "drivers". The tests interact with drivers that let them perform user-focused activities, such as logging in, entering an order, and making a payment. The driver captures the knowledge of how those tasks are performed by navigating around and entering data into the GUI. It does this by using lower-level drivers that capture how navigation and data entry are performed by "gestures", such as clicking on a button or entering text into an input field. You end up with a hierarchy like:
User Goals: the tests verify that the user can achieve their goals with the system and demonstrate how those goals are achieved by a sequence of...
User Activities: things the user does through the GUI, represented as drivers that perform...
Gestures: low level mouse and keyboard input to control the GUI.
This hierarchy is often used in the user-centered design literature (although with different terminology).
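A rough TypeScript sketch of that layering; the class names, debug IDs, and page structure are assumptions for illustration, not the original author's code, and it leans on selenium-webdriver for the gesture layer:

```ts
// Gesture layer: low-level interaction with the browser via selenium-webdriver.
import { WebDriver, By } from "selenium-webdriver";

class Gestures {
  constructor(private driver: WebDriver) {}
  async click(debugId: string): Promise<void> {
    await this.driver.findElement(By.id(debugId)).click();
  }
  async type(debugId: string, text: string): Promise<void> {
    await this.driver.findElement(By.id(debugId)).sendKeys(text);
  }
}

// Activity layer: user-focused tasks, expressed in terms of gestures.
// The element IDs are illustrative; GWT debug IDs are set via ensureDebugId.
class OrderEntryDriver {
  constructor(private gestures: Gestures) {}
  async logIn(user: string, password: string): Promise<void> {
    await this.gestures.type("gwt-debug-username", user);
    await this.gestures.type("gwt-debug-password", password);
    await this.gestures.click("gwt-debug-login");
  }
  async enterOrder(item: string): Promise<void> {
    await this.gestures.type("gwt-debug-order-item", item);
    await this.gestures.click("gwt-debug-place-order");
  }
}

// Goal layer: the test reads as a sequence of user activities, not element lookups.
async function userCanPlaceAnOrder(driver: WebDriver): Promise<void> {
  const app = new OrderEntryDriver(new Gestures(driver));
  await app.logIn("alice", "secret");
  await app.enterOrder("Widget");
}
```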