How do I test-drive GWT development?

Just googling 'TDD' and 'GWT' easily leads to this article, where the author explains how he can test a GWT application without a container. However, I don't think his example is test-driven: he has all the design first and writes the tests afterwards, not 'test-first'.
This leads me to think: is it possible to have 'test-first' development on a UI like GWT? Some people say UI code is not suitable for TDD. But I think that by adopting the MVC pattern, maybe we can at least test-drive the MC part (so V is the UI part which cannot be developed test-first)?
What would be the first failing test we would write for the example in the article?

Test driving UI is problematic because you often don't know what you want on the screen until you see it on the screen. For that reason, GUI development tends to be massively iterative and therefore very difficult to drive with tests.
This does not mean that we just abandon TDD for GUIs. Rather, we push as much code as we possibly can out of the GUI, leaving behind only simple wiring code. That wiring allows us to make the massively iterative changes we need, without affecting the essence of the problem.
This technique was probably best described by Michael Feathers some years ago in an article entitled "The Humble Dialog Box". It is also the fundamental idea behind the Model-View-Presenter pattern that caused such a stir four years ago; and has now been split into the Passive View and Supervising Controller patterns. The article link in this question takes advantage of these ideas, but in a test-after rather than a test-driven way.
The idea is to test drive everything except the view. Indeed, we don't even need to write the view for a good long time. The View is so absurdly simple that it probably doesn't need any kind of unit tests at all; or if it does, they can be written last.
To test drive the Supervising Controller you simply make sure you understand how the data will be presented on the screen. You don't need to know where the data is, or what the font is, or what color it is, or any of the other cosmetic issues that cause the massive iteration of GUIs. Rather, you know one data item will be some kind of text field. Another will be a menu, still another will be a button or a check box. And then you make sure that the View can ask all the questions it needs to ask to get these items rendered correctly.
For example the text box may have a default value. The View should be able to ask for it. The menu may have some items greyed-out. The View should be able to ask for this information. The questions that the view asks are all about presentation, and are devoid of business rules.
By the same token, the view will tell the Supervising Controller when anything changes. The controller will modify the data appropriately, including any kind of validation and error recovery, and then the View can ask how that data should be presented.
All of this can be test driven because it's all decoupled from the visual display. It's all about how the data is manipulated and presented, and not about what it looks like. So it doesn't need to be massively iterated.
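As a rough illustration of this style (the names here are hypothetical, not taken from the article), the Supervising Controller can be plain Java with no widget dependencies, and the first failing test can be written against a fake view:

    // The View only answers presentation questions; it holds no business rules.
    interface OrderView {
        void setQuantityText(String text);
        void setSubmitEnabled(boolean enabled);
    }

    // The Supervising Controller decides how the data should be presented.
    class OrderPresenter {
        private final OrderView view;
        private int quantity = 1; // default value the view can ask to display

        OrderPresenter(OrderView view) {
            this.view = view;
            view.setQuantityText(String.valueOf(quantity));
            view.setSubmitEnabled(true);
        }

        // The view reports a change; the controller validates and updates the presentation.
        void quantityChanged(String newText) {
            try {
                quantity = Integer.parseInt(newText);
                view.setSubmitEnabled(quantity > 0);
            } catch (NumberFormatException e) {
                view.setSubmitEnabled(false);
            }
        }
    }

    // A possible "first failing test": a fake view records what it was told, no widgets needed.
    class OrderPresenterTest {
        static class FakeView implements OrderView {
            String quantityText;
            boolean submitEnabled;
            public void setQuantityText(String text) { quantityText = text; }
            public void setSubmitEnabled(boolean enabled) { submitEnabled = enabled; }
        }

        @org.junit.Test
        public void disablesSubmitWhenQuantityIsNotANumber() {
            FakeView view = new FakeView();
            new OrderPresenter(view).quantityChanged("abc");
            org.junit.Assert.assertFalse(view.submitEnabled);
        }
    }

The real GWT widgets would implement OrderView later, as simple wiring that merely forwards calls, which is why they barely need tests of their own.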

I've successfully test-driven the development of Swing and GWT applications through the GUI.
Testing "just behind the GUI" ignores the integration between the model code and the GUI components. The application needs to hook up event handlers to display data in the GUI when the model changes and receive input from the GUI and update the model. Testing that all those event handlers have been hooked up correctly is very tedious if done manually.
The big problem to overcome when testing through the GUI is how to cope with changes to the GUI during development.
GWT has hooks to help with this. You need to set debug IDs on the GWT widgets and import the DebugID module into your application. Your tests can then interact with the application by controlling a web browser, finding elements by their id and clicking on them or entering text into them. WebDriver is a very good API for doing this.
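For illustration, a rough sketch of what a gesture-level test can look like (the id, URL, and class names are hypothetical; by default GWT prefixes debug IDs with "gwt-debug-" in the DOM):

    // On the GWT side, the widget gets a stable id, e.g.
    //     saveButton.ensureDebugId("saveButton");
    // which renders as the DOM id "gwt-debug-saveButton". The test then
    // drives a real browser through Selenium WebDriver:
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class SaveButtonWebDriverTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("http://localhost:8080/MyApp.html"); // hypothetical app URL
                driver.findElement(By.id("gwt-debug-saveButton")).click();
            } finally {
                driver.quit();
            }
        }
    }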
That's only the start, however. You also need to decouple your tests from the structure of the GUI: how the user navigates through the UI to get work done. This is the case whether you test through the GUI or behind the GUI against the controller. If you test against the controller, the controller dictates the way that the user navigates through the application's different views, and so your test is coupled to that navigation structure because it is coupled to the controller.
To address this, our tests control the application through a hierarchy of "drivers". The tests interact with drivers that let them perform user-focused activities, such as logging in, entering an order, and making a payment. The driver captures the knowledge of how those tasks are performed by navigating around and entering data into the GUI. It does this by using lower-level drivers that capture how navigation and data entry are performed by "gestures", such as clicking on a button or entering text into an input field. You end up with a hierarchy like:
User Goals: the tests verify that the user can achieve their goals with the system and demonstrate how those goals are achieved by a sequence of...
User Activities: things the user does through the GUI, represented as drivers that perform...
Gestures: low level mouse and keyboard input to control the GUI.
This hierarchy is often used in the user-centered design literature (although with different terminology).
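A hypothetical sketch of what such a driver hierarchy can look like in test code (all names invented for illustration):

    // The test speaks in terms of user goals and activities...
    class PaymentTest {
        @org.junit.Test
        public void userCanPayForAnOrder() {
            ApplicationDriver app = new ApplicationDriver(); // activity level
            app.logIn("alice", "secret");
            app.enterOrder("widget", 3);
            app.makePayment("4111111111111111");
            app.showsConfirmation();
        }
    }

    // ...the activity-level driver knows how to navigate the GUI...
    class ApplicationDriver {
        private final GestureDriver ui = new GestureDriver();

        void logIn(String user, String password) {
            ui.enterText("username", user);
            ui.enterText("password", password);
            ui.click("loginButton");
        }

        void enterOrder(String product, int quantity) {
            ui.click("newOrder");
            ui.enterText("product", product);
            ui.enterText("quantity", String.valueOf(quantity));
            ui.click("submitOrder");
        }

        void makePayment(String cardNumber) {
            ui.enterText("cardNumber", cardNumber);
            ui.click("pay");
        }

        void showsConfirmation() {
            ui.assertVisible("confirmation");
        }
    }

    // ...and the gesture-level driver knows only about ids, clicks, and typing
    // (in practice it would delegate to WebDriver using the "gwt-debug-" ids).
    class GestureDriver {
        void click(String id) { /* find the element by id and click it */ }
        void enterText(String id, String text) { /* find the element by id and type into it */ }
        void assertVisible(String id) { /* assert that the element is displayed */ }
    }

When the GUI layout or navigation changes, only the drivers need updating; the tests that express user goals stay the same.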

Related

Isn't react-testing-library redundant with using a full render?

I have a question about react-testing-library. It seems like this is the go-to testing library if you're doing hooks development, since Enzyme doesn't seem to support hooks at this time, at least from the shallow-rendering perspective, and who knows if it ever will. What is driving me a little crazy about react-testing-library is that it suggests doing full renders, firing clicks, changes, etc. to test your components. So say you change the functionality of a Button component: are all the tests that use it going to break? Doesn't it seem odd to render and run tests on every child component of a component when you're already testing those children on their own? Are you expected to mock all those components inside a parent component? Doesn't it seem redundant to do clicks and changes if you're already doing that in automation testing, such as with WebDriver?
The idea is that you test 'mission critical' things in end-to-end testing.
Those tests rely on lots of features all working together: the entire app running and every piece of functionality in between working.
Because they rely on so many things, and take so long to develop and run, you don't want to test everything with an end-to-end test.
And if one breaks, where did it break? Which piece of functionality is no longer working?
If you change the functionality of a button that was used in an end-to-end test, it would fail, as it should. But say the end-to-end test fails and your integration/unit tests on the button also fail? Then you know straight away where your problem is.
And what if you refactor the button so that it still functions the same, but the code implementing it is much cleaner? Then you should design your tests so that they still pass, and this is where react-testing-library really shines.
You mimic how a user might interact with the component and what you expect the component to do, not what its internal state is, as you might in Enzyme.
I'm not a professional developer, but that's my two cents.
You must take a look at the "Testing Trophy" philosophy that Kent C. Dodds talks about: https://testingjavascript.com/
Like Michael mentions in the other answer, if you change the functionality of your Button components, your tests are expected to break. Tests are a clear translation of the business needs, so if needs change, your existing tests are supposed to break, so that the new ones may be incorporated.
On your point about doing automation testing instead, where I'm assuming you mean "end-to-end testing": this is different from the tests that react-testing-library suggests you do. The philosophy asks you to write a good number of integration tests on your parent component, so that you can be sure the way the parent component uses the child component works as intended. They validate the configuration you made on the child component that is specific to the behavior of this parent component, hence the integration tests.

How to display a simple message using WPF and MVVM

I've looked online at several different resources to find a simple example of commands in C# using MVVM and WPF. All I want to know is how to display a message after a button has been clicked. I couldn't find anything this simple online, so I'm asking here.
It really depends on the nature of the message, where it comes from, and when it should be displayed. While it's certainly an operation that is more view-related, the source may be business logic that merits being in the view model for testing.
Generally speaking, I would choose one of two approaches:
Use normal dependency injection when building the view-model, and inject a service (via its interface) that will be in charge of displaying all messages. The real implementation may be as simple as a call to MessageBox.Show, or do more complicated things at the view level.
Give the view-model an event for it to raise, containing all the message data in its EventArgs parameter. The view will subscribe to that event and display the notification as it pleases.
In both cases, the view-model is unaware of view-specific logic, while still being able to encapsulate the business logic for message generation, and it remains fully unit-testable.
Unfortunately, "simple" things such as displaying a message in a Window tend to violate MVVM from a purist standpoint, or require a fair amount of code to build a system where this works cleanly.
That being said, in most cases, showing a message is often a pure view concern. As such, this is something where I'll often just do it in code behind in the View. At first, this may seem like it violates MVVM, but as it's a "pure-View" related operation, it's not really a problem in practice.
You can do it in two ways:
1. You can put a message box in the view-model, in the Execute of the ICommand; when you click on the button, the message box will pop up, or
2. You can have the message hidden in the XAML, and make it visible when you click on the button.

Where in an Eclipse plugin do I place an algorithm that utilises several views

I've got an Eclipse plugin that contains 4 views. A piece of programming functionality must sit 'above' these views, tying them together. I'll call it the 'master'.
Can anybody advise on the best location for this functionality? Really, I want the 'master' to start once the application is open and the views have been initialised.
In my generated RCP application plugin I have an Activator, a Client, a Perspective, an ApplicationActionBarAdvisor, an ApplicationWorkbenchAdvisor, and an ApplicationWorkbenchWindowAdvisor. None of these seem suitable for hosting the 'master'.
Edit: after a little further investigation I suspect the ApplicationWorkbenchWindowAdvisor holds my answer. It has a number of methods that may be overridden to hook into application lifecycle stages. The ones that appear to relate to this problem are: postStartup, postWindowOpen, postWindowCreate
I'd appreciate any pointers on which method is called after all the views have been created/initialised.
Edit 2: more googling has exposed the org.eclipse.ui.startup extension point, as IStartup.earlyStartup() is also run after the workbench has completely started.
Maybe you could define an OSGi service (see the tutorial by Lars Vogel for a detailed how-to: http://www.vogella.de/articles/OSGi/article.html).
This service could be initialized either declaratively, or by using your plug-in activator; then each view could connect to this service.
On the other hand, if you want to communicate between the views, you could simply use the workbench selection service - in this case, all views operate somewhat independently of each other, without central control.
Edit Responding to the changes in the question: neither of the proposed methods has anything to do with the opening (or closing) of the views. postStartup is executed after the application starts; postWindowOpen is executed after the window is opened; while postWindowCreate is executed after the window is created, but before it is opened.
earlyStartup() makes it possible to execute code after the workbench has started, but it still does not guarantee that the corresponding views are open - views have a different life-cycle than windows.
Globally, you have to provide some common service that can be used by each of the views; this can be registered at most points of the application lifecycle - you should choose the one that fits your need best.
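As a minimal sketch of that "common service" idea (class and id names are hypothetical), each view could register itself with a singleton as it is created, and the master logic starts once all of them have reported in:

    // Hypothetical coordinator: each view calls viewReady() from its
    // createPartControl() method; the master starts when all views exist.
    final class MasterCoordinator {

        private static final MasterCoordinator INSTANCE = new MasterCoordinator();
        private static final int EXPECTED_VIEWS = 4;

        private final java.util.Set<String> readyViews = new java.util.HashSet<String>();

        static MasterCoordinator getInstance() {
            return INSTANCE;
        }

        synchronized void viewReady(String viewId) {
            readyViews.add(viewId);
            if (readyViews.size() == EXPECTED_VIEWS) {
                startMaster();
            }
        }

        private void startMaster() {
            // Wire the four views (or better, their models) together here.
        }
    }

The same coordinator could just as easily be published as an OSGi service instead of a singleton; the point is only that it is registered independently of any one view's life-cycle.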
I think you're mixing concepts. Algorithms don't work on views but on the models which the views show.
A view is nothing but a window which transforms bits in memory to something a user can find useful.
So what you really want is to separate the models in your views from the views themselves. That will also make testing much simpler. Have them publish events when they change.
Your algorithm should subscribe to the events of all four models and do its work, putting the result in another model where the same or other views can pick it up.
In the views, also subscribe to the appropriate events which your models emit and update them accordingly.
This way, you can separate the model from the views. You won't get into trouble when the user rearranges views or closes them.
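A minimal sketch of that arrangement (names invented for illustration): the models publish change events, the algorithm subscribes to its four inputs and writes its output into another model that the views can observe:

    import java.util.ArrayList;
    import java.util.List;

    // Models publish events; neither the algorithm nor the views depend on widgets.
    interface ModelListener {
        void modelChanged(Object newValue);
    }

    class Model {
        private final List<ModelListener> listeners = new ArrayList<>();
        private Object value;

        void addListener(ModelListener listener) { listeners.add(listener); }

        Object getValue() { return value; }

        void setValue(Object newValue) {
            value = newValue;
            for (ModelListener listener : listeners) {
                listener.modelChanged(newValue);
            }
        }
    }

    // The 'master' algorithm subscribes to the four input models and writes
    // its result into another model that any view can observe.
    class MasterAlgorithm {
        private final Model result = new Model();

        MasterAlgorithm(Model a, Model b, Model c, Model d) {
            ModelListener recompute = changed -> result.setValue(compute(a, b, c, d));
            a.addListener(recompute);
            b.addListener(recompute);
            c.addListener(recompute);
            d.addListener(recompute);
        }

        Model getResult() { return result; }

        private Object compute(Model a, Model b, Model c, Model d) {
            // Placeholder: combine the four model values however the algorithm requires.
            return "" + a.getValue() + b.getValue() + c.getValue() + d.getValue();
        }
    }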
I think the best you can do is set up a Perspective with those views and lock them, so the user may not close them.
I do not remember exactly how you can do this, but I think setting the perspective as 'fixed' on the extension point declaration might do the trick.

UI First or logic first?

While working on projects, I often come to the dilemma of working on the UI first or on the logic first. Having the UI first gives a nice overview of what the end product is going to look like, while having the logic first uncovers any possible roadblocks in the technology.
However, it is not always that crystal clear. Sometimes the UI may need data to be populated to really show what it means, and simulating the data could be more difficult than implementing the logic. What is your preferred approach for development, and why? Which is more efficient and effective?
(I am seeing this issue more and more with iPhone projects.)
Neither.
You will have to do some thinking at the beginning of the project anyway, deciding the general approach you will take and what operations you will support.
Do that reasonably well and you will have defined the interface between the view and the underlying logic. Look at the Model-View-Controller approach for some inspiration.
What you want to have early on is an idea of what basic operations your logic code needs to perform in order to achieve a purpose. Usually it will be a simple function call, but sometimes it may involve more than that. Have that clear first.
Next, a complex system that works is based on a simple system that works.
Which means you will need to have a basic UI you'll use to test a basic logic implementation first.
A simple form with a button which presents a message is basic enough. Then it can grow: you implement a piece of functionality, and then you add a simple UI you can test it with.
It is easier to do both piece by piece as the logic and UI for a small piece of the logic are conceptually similar and it will be easy to keep track of both while you implement and test.
The most important part is to keep the UI and logic decoupled, making them talk through a common interface. This will allow you to make quick edits when needed and improve the looks of the GUI towards the end.
You will be better able to just scrap the UI if you don't like it. All you'll need to do is use the same interface, one which you know how to do because you wrote it and you already implemented it.
If at some point you realize you've made a big mistake you can still salvage parts of the code, again because the UI and logic are decoupled and, hopefully, the logic part is also modular enough.
In short: think first, do both the UI and the logic in small increments and keep things modular.
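A minimal sketch of that decoupling (hypothetical names): the logic sits behind a small interface, and whatever UI you have at the moment, from a throwaway console front end to the final GUI, only talks to that interface:

    // The logic exposes a small interface and knows nothing about presentation.
    interface OrderService {
        double totalFor(int quantity, double unitPrice);
    }

    class SimpleOrderService implements OrderService {
        public double totalFor(int quantity, double unitPrice) {
            return quantity * unitPrice;
        }
    }

    // Any UI only talks to the interface, so the front end can be scrapped
    // and replaced later without touching the logic.
    class ConsoleUi {
        public static void main(String[] args) {
            OrderService service = new SimpleOrderService();
            System.out.println("Total: " + service.totalFor(3, 9.99));
        }
    }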
Iterate between both (logic and UI). Back and forth. The UI may change as you understand how and what you can do with the logic and whatever constraints that may present. The logic may change as you change the features, behavior, or performance requirements of an easy-to-use, responsive, and decent looking UI.
I usually do the minimum possible of each until I have some barely working mockups. Then I use each mockup to test where my assumptions about the proper UI and/or logic might be right or wrong. Pick the most flawed, and start iterating.
Apple suggests mocking up the UI on paper first. (Have an eraser handy...)
If possible, in parallel.
But personally, I advocate logic first.
I start with the fundamentals first and that means getting the logic coded and working first. There are two reasons for this:
If I can't get the logic working correctly having a pretty UI is useless and a waste of my time
You will most likely change the UI when you work on the logic aspect making the UI process longer and more expensive
I usually get my UI in order first. The reason? As I prototype different designs, sometimes my idea for the app changes. If it does, then it's no consequence - there is no code to rewrite.
However, sometimes it is helpful to get the fundamentals first in order to determine if the app is going to work or not. If it's not going to function, then why waste time making interfaces?
I like to start by laying out the different parts of my project in something like Visio.
I make boxes for the different views I expect to have, and I fill them with the information I expect them to contain.
I make another set of boxes for my expected model objects (logic). I fill them with the information I expect they will work with, and I draw lines between models and views where I think it will be necessary.
I do the same thing for object graphs (if I plan on using CoreData), and for database tables if I am going to have an external database.
Laying everything out visually helps me decide if I am missing any important features or interactions between project components. It also gives me something to quickly refer to if I later lose track of what I was doing.
From that point, I tend to work on a model until it has enough done to fill out part of a view, then I work on a view until it can interact with the model.
I also try to identify views or models that could be reused for multiple purposes so that I can reduce the overall amount of work I have to do.
Take the agile approach and work on small amounts of both in iterations, slowly building each functional piece of the program so as not to build any monolithic piece at once.

Unit Testing is Wonderful, But

I took the time to set up some unit tests and set up the targets in Xcode, etc., and they're pretty useful for a few classes. However:
I want to test small UI pieces for which I don't want to launch the entire application. There is no concept of pass/fail: I need to "see" the pieces, and I can make dummy instances of all the relevant classes to do this. My question is: how can I set this up in Xcode?
I realize I could use another Xcode project for each class (or groups of classes), but that seems a bit cumbersome. Another target for each?
I know that you're looking for an approach to testing UI components that doesn't require a fully functional application, but I've been impressed with what the new UI Automation instrument introduced in iOS 4.0 lets you do.
This instrument lets you use Javascript scripts to interactively test your application's interface, and it does so in a way that does not require checking exact pixel values or positions on a screen. It uses the built-in accessibility hooks present in the system for VoiceOver to identify and interact with components.
Using this instrument, I have been able to script tests that fully exercise my application as a user would interact with it, as well as ones that hammer on particular areas and look for subtle memory buildups.
The documentation on this part of Instruments is a little sparse, but I recently taught a class covering the subject for which the video is available on iTunes U for free (look for the Testing class in the Fall semester). My course notes (in VoodooPad format) cover this as well. I also highly recommend watching the WWDC 2010 video session 306 - "Automating User Interface Testing with Instruments".
Well, you cannot call showing a piece of some GUI "testing", even if that GUI is part of a large application. What you can do here is create a separate executable target and write a small tool that reuses GUI components from your application and shows them to you based on input parameters. This will eliminate the need for many, many different targets.
If you still insist on using unit tests, you can show your GUI for some period of time, for example 10 seconds. The test case will then run until the GUI is closed or the timeout elapses, so each test will take up to N seconds to execute.
This is a good question. I think you actually do not want to use unit tests for those 'visual confirmations'. Personally I usually write little test apps to do this kind of testing or development. I don't like separate targets in the same project so I usually just create a test project next to the original one and then reference those classes and resources using relative paths. Less clutter. And it is really nice to be able to test more complex user interface elements in their own little test environment.
I would take a two-level approach to UI "unit testing":
Although Cocoa/Cocoa Touch are still closer to the Model-View-Controller than the Model-View-ViewModel paradigm, you can gain much of the testability advantage by breaking your "View" into a "view model" and a "presenter" view (note that this is somewhat along the lines of the NSView/NSCell pair; Cocoa engineers had this one a long time ago). If the view is a simple presentation layer, then you can test the behavior of the view by unit testing the "view model".
To test the drawing/rendering of your views, you will have to either do human testing or do rendering/pixel-based tests. Google's Toolbox for Mac has several tools for doing pixel-by-pixel comparison of rendered NSViews, CALayers, UIViews, etc. I've written a tool for the Core Plot project to make dealing with the test failures and merging the reference files back into your unit test bundle a little easier.