How to easily debug Flutter end-to-end tests, with time travel, action logs, and screenshots? - flutter

When doing end-to-end testing for Flutter, I find it very inconvenient to debug the tests. For example, for an e2e test that taps, drags, and asserts a ton of things, when it fails I cannot easily tell what actually caused the failure. It may be caused by misbehavior that happened 10 steps earlier.
Thus, I hope I can have the well-known time-traveling functionality for Flutter tests (or action logs, or screenshots for every step). In other words, with a button tap I can see "what did the UI look like when that button was tapped 50 steps ago?" Then I can go through the history and easily spot what went wrong.
Is it possible to implement it? Can I integrate it into integration_test-based tests or do I have to create a brand new framework?

Here it goes: https://github.com/fzyzcjy/flutter_convenient_test - Write and debug tests easily, with full action history, time travel, screenshots, rapid re-execution, video records, interactivity, isolation and more. (With a video demo showing GUI: https://github.com/fzyzcjy/flutter_convenient_test#-quick-demo)
The implementation can be seen in the code. In short, when actions like "tap" or "expect widget exists" are detected, logs are created and screenshots are automatically generated. Later, they can be displayed in a nice GUI.
It is compatible with integration_test, since we still make use of that framework and only add automatic logging and screenshotting to it.
(Disclaimer: This is a Q&A-style question, posted so that people who need this can know a library already exists and there is no need to reinvent the wheel. I am the author of the open-source library.)

Related

Google Smart Home - Report State Real Time UI update

I have a question regarding report state and live updating in the app.
When I report a state from my server I expect to see changes in my thermostat without going to the main screen of the app and back into the thermostat. Now I have read many similar questions about this and I understand the app doesn't support updating the UI in real time with report state.
I also followed the codelabs tutorial on implementing a smart home action (https://codelabs.developers.google.com/codelabs/smarthome-washer/#0). With this implementation the UI updates as soon as report state is called, which is what I would expect.
Essentially, what I have done is just modify the codelabs example to work with Express and change the washer to a thermostat. Also, the report state call returns status code 200.
So how come the UI is updated when using the demo implementation from codelabs, but not when I use my implementation? The code from codelabs runs on Firebase while mine runs on an Express instance on my laptop; maybe that's the problem?
The Google Home app depends on various components for updating the state in the UI, including query responses and report state. You should make sure that you are able to respond to queries successfully as well as reporting state.
You can verify that both your report state calls and query responses are functional by using the Test Suite. If your reported states are shown as having errors there, you can also use the Home Graph Viewer to see the states of your devices in Home Graph.

Best method to run a periodic background service in java blackberry

Objective: I want to develop a UI application that runs a service/task/method periodically to update a database. This service should keep running periodically even if my application is not active/visible or the user exits the app. Similar to an Android Service.
I'm using the BlackBerry Java 7.1 SDK Eclipse plugin.
The options I came across are the following:
1) How to run BlackBerry application in Background
This link suggests that I extend Application instead of UIApplication. But I can't do that as my application has a user interface.
2) Make application go in background
I don't want my UI application to go to the background; instead, I just want my application to call the service periodically.
3) Run background task from MainScreen in BlackBerry?
This link suggests running it in a thread, but I don't think the thread will keep running in the background if the user exits my application.
4) Blackberry Install background service from UI application?
This suggests using CodeModuleManager, whose usage I'm unable to figure out.
Please suggest the best way to achieve this objective, or any other better method.
I am new to BlackBerry, so please pardon my ignorance.
To expand on Peter's Answer:
You will need to create two classes:
class BgApp extends Application
class UiApp extends UiApplication
I guess you have already created the class that extends UiApplication. So add another class that extends Application.
Then create a class that extends TimerTask and implement its run method to call the method that updates the database.
class UpdateDatabaseTask extends TimerTask
In the BgApp constructor, create a Timer and schedule the UpdateDatabaseTask using the schedule(TimerTask, long, long) method.
Define alternate entry points, and check the "Do not show on homescreen" and "auto run on startup" checkboxes for the BgApp entry point.
It is easiest and simplest to use the built-in persistence mechanism (PersistentStore and the Persistable interface) for storing data. Even if you use another mechanism like RecordStore or SQLDb, both UiApp and BgApp can access the same database. The values updated by BgApp will be accessible by UiApp and vice versa, automatically.
If you want to send a signal from bgapp to uiapp (for example when bgapp downloads new data you want the uiapp to reload the data instantaneously), post a Global Event (ApplicationManager.postGlobalEvent()) when the download is complete and listen for it in the screen that is displaying the data (GlobalEventListener interface).
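Putting those pieces together, here is a minimal sketch of what the two entry classes might look like. Only the BlackBerry/J2ME API calls (Timer.schedule, ApplicationManager.postGlobalEvent) are real; the intervals, the event GUID, and the task body are placeholders you would replace with your own.

```java
import java.util.Timer;
import java.util.TimerTask;

import net.rim.device.api.system.Application;
import net.rim.device.api.system.ApplicationManager;
import net.rim.device.api.ui.UiApplication;

// Background entry point: no UI, just a Timer driving the periodic update.
class BgApp extends Application {
    // Arbitrary GUID used to signal the UI application; any unique long will do.
    static final long DATA_UPDATED_EVENT = 0x6b4e3f2a91c8d510L;

    BgApp() {
        Timer timer = new Timer();
        // First run after 1 minute, then repeat every 15 minutes.
        timer.schedule(new UpdateDatabaseTask(), 60 * 1000L, 15 * 60 * 1000L);
    }
}

// Periodic job: update the database, then tell the UI app (if running) to reload.
class UpdateDatabaseTask extends TimerTask {
    public void run() {
        // ... download data and write it to the shared store (PersistentStore / SQLite) ...
        ApplicationManager.getApplicationManager()
                .postGlobalEvent(BgApp.DATA_UPDATED_EVENT);
    }
}

// Foreground entry point: your normal UI application.
class UiApp extends UiApplication {
    UiApp() {
        // pushScreen(new MyMainScreen());  // your existing UI goes here
    }
}
```

The screen that displays the data would implement GlobalEventListener and reload itself when DATA_UPDATED_EVENT arrives.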
There are code samples for each of these available as part of the SDK, or search the internet and you'll find a lot of implementations.
Good research, lots of interesting thoughts.
I think the best thing to do is to try the simple standard approaches and only make something more sophisticated if you need to.
Here are two options that would be regarded as 'standard', with brief advantages and disadvantages:
a) Make your UiApplication go to the Background
Instead of exiting when the user presses the 'close' button, your UiApplication will call requestBackground(). It will automatically be brought to the foreground when the user clicks on the icon or selects your application from the task switcher. Then you can run a Thread whenever you want, or in fact leave one running to update the database.
This is my preferred method. But you have to be careful with memory management to make sure there are no leaks. And some people don't like the idea that the application is visible in the task switcher all the time.
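As a small illustration (the screen class name is made up), the close handling for option a usually looks something like this:

```java
import net.rim.device.api.ui.UiApplication;
import net.rim.device.api.ui.container.MainScreen;

class MyMainScreen extends MainScreen {
    // Called when the user presses the close/back action.
    public boolean onClose() {
        // Hide the UI but keep the process (and any worker Thread) alive,
        // instead of letting the default behaviour exit the application.
        UiApplication.getUiApplication().requestBackground();
        return true; // we handled the close ourselves
    }
}
```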
b) Alternate Entry
With this option, your one Application package contains two Applications, or more accurately, one Application and one UiApplication. The UiApplication is run when the user clicks on the icon. The Application runs as a background task, and updates the database for your UiApplication.
This looks like a more elegant solution, but introduces some possible communication issues, and is more difficult to debug.
In your case, since you are relatively new to BB, I would suggest that you use option a, and if you find it doesn't work for you, you will not find it that difficult to swap to option b.
And to comment on the Options you have already presented:
1) Sort of covered by option b
2) Option a
3) You are correct - if an Application exits, all the Threads are killed
4) Leaves the problem of creating the application in the first place and then debugging it. This is not really a solution for you, more an implementation method.
The above is brief, please ask if it is not clear.
This might help with b:
http://supportforums.blackberry.com/t5/Java-Development/Set-up-an-alternate-entry-point-for-an-application/ta-p/444847
Edit:
Editing this to respond to the questions and to expand on the alternative answer, which expanded on this one (a bit circular, I know...).
To answer the second question first, I agree with the other answer which states the alternate entry (background) and the foreground app can share an SQLite database.
With respect to how these two communicate: while Global Events work just fine, personally I am not a great fan of them because they are propagated to all applications on the BlackBerry. You can achieve similar things in many alternative ways - the trick is to find something that is common to both applications so that they can communicate. To this end, I recommend using RuntimeStore. See this KB article:
http://supportforums.blackberry.com/t5/Java-Development/Create-a-singleton-using-the-RuntimeStore/ta-p/442854
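A rough sketch of that RuntimeStore singleton (the GUID, class name, and field are placeholders; only RuntimeStore.getRuntimeStore(), get() and put() are real API):

```java
import net.rim.device.api.system.RuntimeStore;

// Shared between BgApp and UiApp: both applications see the same instance.
class SharedState {
    // Use your own unique long here (e.g. a hash of a unique string).
    private static final long GUID = 0x2f8c11d94ab7e063L;

    static SharedState getInstance() {
        RuntimeStore store = RuntimeStore.getRuntimeStore();
        synchronized (store) {
            SharedState instance = (SharedState) store.get(GUID);
            if (instance == null) {
                instance = new SharedState();
                store.put(GUID, instance);
            }
            return instance;
        }
    }

    // Example of a value one side writes and the other reads.
    volatile long lastUpdateTimestamp;
}
```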
Regarding how you persist your database, I like PersistentStore because it is present on all devices. But if you really have a database, and not persistent Objects, then SQLite seems the ideal thing to use. Personally I would not use RecordStore, but here is a discussion of the options:
http://supportforums.blackberry.com/t5/Java-Development/Introduction-to-Persistence-Models-on-BlackBerry/ta-p/446810
And just a clarification: in the example given, you have two applications, BgApp and UiApp. You will only have one main() method. This main method will use the args that you specify to determine which one to start, which it will create and have "enter the dispatcher". If I could make a recommendation - use "gui" as the argument to specify that you will start your UiApplication. I have experienced a circumstance where the OS attempted to start my alternate entry UI application with this String, regardless of what I had actually specified. It might have been a one-off, but I have stuck to doing that ever since.
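A minimal sketch of that single main(), assuming "gui" is the argument configured for the UI entry point and the background (auto-start) entry point passes something else or nothing:

```java
public class EntryPoint {
    public static void main(String[] args) {
        if (args != null && args.length > 0 && "gui".equals(args[0])) {
            UiApp app = new UiApp();        // foreground entry point
            app.enterEventDispatcher();     // blocks for the lifetime of the UI app
        } else {
            BgApp app = new BgApp();        // background entry point
            app.enterEventDispatcher();     // blocks; the Timer keeps firing
        }
    }
}
```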
Finally, two comments on the use of Timer and TimerTask to provide triggered events. The first is that whatever you run in the TimerTask should not take long, so you should just use the TimerTask to initiate the download Thread (which might take a long time). Secondly, in this situation I would not use Timer/TimerTask myself. I would rather just have a single Thread which 'waits' and then processes. The advantage to me is that this can be adaptive. For example, if you fail to connect, you might shorten the time until the next connection attempt. Or if it is after hours, you might lengthen the time between connections to reduce battery usage. Or you might stop connecting completely when the battery is very low.
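A rough sketch of that single 'waiting' Thread with an adaptive interval (the intervals and back-off policy are only examples):

```java
class UpdateWorker extends Thread {
    private volatile boolean running = true;
    private long intervalMs = 15 * 60 * 1000L;   // nominal: every 15 minutes

    public void run() {
        while (running) {
            boolean ok;
            try {
                ok = downloadAndStoreData();     // the potentially long-running work
            } catch (Exception e) {
                ok = false;
            }
            // Adapt: retry sooner after a failure, relax after a success.
            intervalMs = ok ? 15 * 60 * 1000L : 2 * 60 * 1000L;
            synchronized (this) {
                try {
                    wait(intervalMs);            // also wakes early if stopWorker() is called
                } catch (InterruptedException e) {
                    // ignore and loop around
                }
            }
        }
    }

    public void stopWorker() {
        running = false;
        synchronized (this) {
            notify();
        }
    }

    private boolean downloadAndStoreData() {
        // ... connect, download, write to the shared store ...
        return true;
    }
}
```

The same wait() call is also where you would plug in the 'after hours' or 'battery low' checks before deciding the next interval.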
Hope this helps.

Should functional testing simulate UI events or check for preconditions?

I am struggling with this question since I noticed that many functional testing frameworks (like Selenium for the web or UISpec for iOS) actually simulate UI events while testing. I am asking: couldn't it be sufficient just to check preconditions, e.g. that the target and selector for a button are set correctly, and then fire the selector manually? Why do I need to simulate touches? This has the downside that you have to know more about the UI elements you're testing (you have to know what makes them behave correctly), but since I am the one writing the tests, maybe this doesn't matter?
Could anyone shed some light on this?
Simulating touches can be useful for determining crashes caused by obscure or unplanned user behaviour - a particularly common one is having two items pressed simultaneously. It also allows you to create potentially quite esoteric tests: for example, random user input for a sustained period of time to attempt to crash or break your application in ways you wouldn't expect. The level to which you'd do this would depend on your app, and how important it was to you.
Your alternative approach also has some disadvantages when it comes to multi-touch. Whilst it would be fairly straightforward to fire a button selector through some sort of automatic test rather than simulating user input, what happens if you have an app that deals with swiping, pinching, or other multiple input gestures? In those cases the desired result may not be as black and white as the on/off of the button: you may have many shades of grey and differing output that required validation.
Simulated UI testing actually has quite a long history - there's an interesting story (well, interesting to me) about the original MacPaint and how a random UI input test was able to assist in reproducing obscure or difficult crashes here: http://www.folklore.org/StoryView.py?story=Monkey_Lives.txt

OpenFeint achievements performance

I've decided to integrate OpenFeint into my new game to have achievements and leaderboards.
The game is dynamic and I would like the user to be rewarded immediately for successful results, but it seems to me that OpenFeint's achievements are a bit sluggish: it shows the visual notification only when it receives confirmation from the server.
Is it possible to change something in the settings, or hack it a little bit, so that it shows the notification immediately after checking only the local database that the achievement has not already been unlocked?
Not sure if this relates to the Android version of the SDK (which seems even slower), but we couldn't figure out how to make it faster. It was so unacceptably slow that we started developing our own framework that fixes most of OpenFeint's shortcomings, and then some. Check out Swarm; it might fit your needs better.
There are several things you can do to more tightly control the timing of these notifications. I'll explain one approach, and you can use this as a starting point to explore further on your own. These suggestions apply specifically to iOS apps. One caveat is that they refer to internal APIs in OFSDK 2.8 for iOS, which are not ordinarily recommended for high-level use and are subject to change in future versions.
The first thing I recommend is that you build the sample app with your own product key. Use the standard sample app to experiment before applying the result to your own code.
You are going to get the snappiest response by separating the notification pop-up UI from the process of submitting the achievement. This way you don't have to worry about getting wrapped up in the logic for deciding whether the submission is going just to the local db or is doing the full confirmation on an async network transaction.
See the declaration of "showAchievementNotice" in "OFNotification.h". Performing a search in the sample app, you will see that this is the internal API used for displaying the achievement pop-up when an achievement is earned. It does not actually submit the achievement. You can call this method directly as it is called from "OFAchievementService.mm" to directly control when the message appears. You can then use the following article to disable the pop-up from being called when the actual submission occurs:
http://support.openfeint.com/dev/notification-pop-ups-in-ios/
This gives you complete freedom to call the submission at a later time provided you keep track of the need to do so. For example, you could locally serialize a flag to take care of the actual submission either after the level is done or the next time the app starts up. Don't forget that the user could quit out of a game without cleanly finishing a level.

Tips for finding things in your program that are broken that you don't know about?

I was working on something for a client today when I found a way to break some functionality in our program.
(The code is really legacy code, it's been in development for about 10 years and I've only been working here for about a year.)
It didn't cause an error, or cause the program to crash, but if a user was using the program and duplicated the behavior I'm pretty sure they'd be holding up their "WTF?" flag.
In our program we have named fields (textboxes) and static text (labels) that can be linked with the textboxes. When the textbox is not filled in the label(s) that were linked to them disappear.
The functionality that I broke: when you change the name of a textbox that already has one or more labels linked to it, and save the file without re-associating those labels, the formerly associated labels appear when the textbox is blank.
Now my thinking on the matter is that a simple observer pattern could have solved this problem in the first place, but then I didn't write the code.
I was thinking that if I could dig up more situations like this with the guys in my shop, that maybe I could talk them into considering unit testing, decoupling, applying patterns where they are called for and the like.
So for this reason I was wondering if anyone had any tips for finding broken (but not error-causing) functionality in any sort of app (web-based, desktop, etc.).
For an app to fail usability, it has to have a defined set of expected behaviors.
"Is this textbox SUPPOSED to do nothing when the enter key is pressed?" Maybe it is, maybe it isn't. I've seen apps where a tester/reviewer reports something that they ASSUME should work another way, when in actuality the client specifically asked that they DON'T want the form submitted on a return key press, but only a submit button click.
So basically you have to define proper behaviour before you can determine incorrect behavior.
Hire some testers.
If it has an interface, then one of my favorite unconventional tests is putting 5-10 year old children in front of it. You'd be surprised what they can come up with (especially the younger ones). While this may sound like a joke, it isn't - it really works, because children don't have the mindset of only going down the expected paths.
And yeah, children are the experts in "breaking things" xP.
Code inspections, i.e. reading the source code: if you had taken time to read/inspect the source code, looking for "smells" or even just looking for code whose behaviour you don't immediately understand and agree with, you might have been holding up your "WTF?" flag too.
Test, test, test.
Do unexpected things. Start doing one task and switch to another to see if anything goes haywire. Use the back button when you're not supposed to. Open it in two windows. Let it time out.
Test in all browsers, especially IE.
You can find out whether database connections/sessions aren't released by:
- working out the minimum number of connections you need to do something
- setting resource limits to that minimum number
- ensuring one "run" of the scenario uses exactly that number (and releases the connections afterwards)
- then running it again a few times... do you run out of connections?
I used to work in a company where programmers regularly used to forget to de-allocate db connections. The standard answer was to reduce the resource to a minimum to see if there's a leak - and to try to work out where it is by restarting the system and running different scenarios repeatedly.
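The same idea in plain Java, with a counting semaphore standing in for whatever real limit you set on your pool or database user (everything here is illustrative):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class ConnectionLeakCheck {
    // Suppose the scenario under test is known to need at most 2 connections at once.
    private static final Semaphore PERMITS = new Semaphore(2);

    static void acquireConnection() throws InterruptedException {
        // If a previous run leaked a connection, this times out instead of hanging forever.
        if (!PERMITS.tryAcquire(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("Out of connections - probable leak");
        }
        // ... open the real connection here ...
    }

    static void releaseConnection() {
        // ... close the real connection here ...
        PERMITS.release();
    }

    static void runScenario() throws InterruptedException {
        acquireConnection();
        try {
            // ... the workflow under test ...
        } finally {
            releaseConnection();   // a leak means this line is missing somewhere
        }
    }

    public static void main(String[] args) throws Exception {
        for (int run = 0; run < 20; run++) {
            runScenario();
        }
        System.out.println("No exhaustion after 20 runs");
    }
}
```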
The first hour of code review, with the first reviewer, will do the most to find quality problems. But here's the thing: You don't need to convince people of quality problems. You need to convince them of the value of fixing bugs, and of rewriting only when the present quality absolutely justifies it.
I've dealt with some seriously bad code in my time. But you can't just rewrite. You need a spec before you can even tell if the rewrite is an improvement.
Sometimes, you have to infer the spec from the code and then check it against some human somewhere. But by the time you've done that, you understand the code as written and are now better prepared to repair than to rewrite -- most of the time.
Repair proceeds by a process of small behavior-preserving modifications that render the spec more clear in the code. Then, when you find something that looks wrong, you don't just change it. You ask around until you find the person responsible for that decision, and you get them to show you where in the spec it says that behavior X is correct. (This conversation can take many forms.) If you're lucky, they'll tell you that behavior X is in fact incorrect, and then you've earned your pay.
assert()
Also unit testing with coverage analysis.
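For instance (purely illustrative, loosely borrowing the textbox/label scenario from the question above), an assertion turns a silently wrong state into an immediate failure when tests run with assertions enabled (java -ea):

```java
class LabelLink {
    private String textBoxName;

    void relinkTo(String newName) {
        // Fail fast during testing instead of quietly keeping a stale association.
        assert newName != null && newName.length() > 0 : "label linked to an unnamed textbox";
        this.textBoxName = newName;
    }
}
```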
This is particular to the Visual Studio IDE, although it probably also applies to others:
During testing, always at some point run in the debugger with "Break when an exception is thrown" turned on.
This can often help expose exceptions which are incorrectly being silently caught and which represent bugs, but otherwise may not be evident.
Code reviews should always also include reviews of the unit test code.
The problem is that with ad-hoc testing it's impossible to know how much, or how well, a developer has tested their code. So you're at the mercy of each developer's definition of the word "done".
If you include reviews of the unit test code at the same time you review the production code, you should have a good idea of whether the code is really complete, where "complete" includes "tested". Not just "Hey, I'll throw it over the wall to the testers!".