Use openapi service in Flutter Drive integration test, but run into dart:ui problem

The app I am trying to test uses feature toggles to enable/disable certain parts of the app, but the tests I've written cover all the features. When a user logs in, the app fetches the feature toggles from a REST service (through a class that uses the generated OpenAPI client) so it knows what to show and what not to show.
Now I want to include those feature toggles in my tests, so that the corresponding tests are skipped rather than simply failing when some parts aren't enabled. However, when I try to include the class that makes the call, I get dart:ui errors in the console and the test no longer runs. When I (recursively) check the imports of those service classes, some of them pull in widgets.dart, so I suspect that's the problem. I tried removing most of those imports, but since we use localized strings for error messages and the like, stripping all of that out of those files is becoming very cumbersome.
So before I continue down that path: is there an easy way to call a REST service from an integration test?
I checked the Flutter Drive documentation and searched online for similar questions, but haven't found anything comparable.
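One widget-free approach is to fetch the toggles with plain dart:io instead of going through the generated client, since the driver side of a Flutter Drive test runs in a plain Dart VM where dart:ui is unavailable. A minimal sketch, assuming a hypothetical endpoint that returns a JSON map of flag names to booleans:

// feature_toggle_client.dart - deliberately imports no Flutter code, so it is
// safe to use from the host-side driver test.
import 'dart:convert';
import 'dart:io';

/// Returns the names of the enabled toggles. The endpoint and payload shape
/// are hypothetical; adjust them to the real service.
Future<Set<String>> fetchEnabledToggles(Uri endpoint, String bearerToken) async {
  final client = HttpClient();
  try {
    final request = await client.getUrl(endpoint);
    request.headers.set(HttpHeaders.authorizationHeader, 'Bearer $bearerToken');
    final response = await request.close();
    final body = await response.transform(utf8.decoder).join();
    final flags = jsonDecode(body) as Map<String, dynamic>;
    return flags.entries.where((e) => e.value == true).map((e) => e.key).toSet();
  } finally {
    client.close(force: true);
  }
}

The resulting set can then drive the skip parameter of each test, e.g. test('feature X', () async { ... }, skip: !toggles.contains('featureX'));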

Related

Google Actions CLI 3.1.0 version and actions.intent.TEXT

I want to be able to talk with Google Assistant, but connect the Actions project directly to an NLP service I already have running on my server; in other words, NOT use Dialogflow.
The following examples all show how to do this.
With Rasa
https://blog.rasa.com/going-beyond-hey-google-building-a-rasa-powered-google-assistant/
With LUIS
https://www.grokkingandroid.com/using-the-actions-sdk/
https://dzone.com/articles/using-the-actions-sdk-for-google-assistant-develop
With Watson
https://www.youtube.com/watch?v=no0R0bSkHXc
They use actions.intent.MAIN as the invocation and actions.intent.TEXT for all other utterances from the talker.
This is what I need. I don't want to create a load of intents with utterance phrases inside the Action, because I just want every phrase spoken by the talker to be passed to my server and handled by my NLP service.
So I set up a new Actions project, installed the Actions CLI, and then spent 3 days trying all possible combinations without success, because all these examples use gactions CLI 2.1.3 and Google has since moved on to gactions CLI 3.1.0.
Not only have the commands changed, but so have the file formats and structure.
There is also a new Google Actions Console, and actions.intent.TEXT is no longer available.
My Action is connected to my server via webhook, but I cannot figure out how to get actions.intent.TEXT included and working.
Everything I find, even here
Publishing Actions on google without Dialogflow
predates the version update and follows the same pattern.
Can anyone point to an up-to-date (v3.1.0) discussion, tutorial, or example of how to send all talker phrases through to an NLP service that isn't Dialogflow, or has Google closed that avenue?
Alternatively, is it possible to go back and use the 2.1 CLI, either with the new Console or by reverting the Console? (I have both CLI versions, and I can see how different their commands are.)
Is it possible to go back and use 2.1?
There is no way to go back to AoG 2, and you probably don't want to: the newer features are only available with v3.
Can I use my own NLP with v3?
Yes, although it isn't as obvious, and there are some changes in semantics.
As an overview, what you'll need to do is:
Create a Type that can accept "Free form text". I usually call this type "Any".
Create a Custom Intent that has a single parameter of this Any type and at least one training phrase that captures the entire utterance for that parameter. (Add one training phrase, highlight the whole phrase, and assign it to the parameter. Sometimes I also add extra phrases that include words I don't want to capture.) I usually call the Intent "matchAny" and the parameter "any".
Finally, you'll have a Scene that you transition to from the Main invocation. When it matches the "matchAny" Intent, it should call your webhook with a handler name. Your webhook will then be called with the "any" parameter set to the user's utterance. (Note that the JSON format has also changed.)
That seems like a lot of work. Isn't there just some way to do all that from the command line?
Yes. You can do all of that in the configuration files that the CLI reads and then upload it. (You can also use the console to review the configuration, if necessary, to make sure everything is set up as you expect, and you can shift back and forth between the two as appropriate.)
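As an illustration only (file and scene names are mine, and the exact schema should be checked against Google's samples), the "Any" type and "matchAny" intent described above map to v3 SDK configuration files roughly like these:

# custom/types/any.yaml - a type that accepts free-form text
freeText: {}

# custom/intents/matchAny.yaml - one parameter that captures the whole utterance
parameters:
- name: any
  type:
    name: any
trainingPhrases:
- ($any 'hello world' auto=false)

# custom/scenes/MainScene.yaml - route the matched intent to the webhook
intentEvents:
- intent: matchAny
  handler:
    webhookHandler: matchAny

Once edited, gactions push uploads the draft project so you can review it in the console.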
Google also has a GitHub repository that contains most of the files pre-configured for this sort of setup.
You will need to update the configuration from the repository to handle the webhook correctly (it includes code that illustrates what is happening using the inline code editor) and to add your project ID.
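To sketch the webhook side under the naming convention above (handler "matchAny", parameter "any"), using the @assistant/conversation Node.js library; the NLP call itself is left as a placeholder:

// webhook/index.js - minimal fulfillment sketch for Actions SDK v3.
const { conversation } = require('@assistant/conversation');
const app = conversation();

// Invoked when the scene matches the "matchAny" intent.
app.handle('matchAny', async (conv) => {
  // The raw user utterance, captured by the free-form "any" parameter.
  const utterance = conv.intent.params.any.resolved;
  // Forward the utterance to your own NLP service here (hypothetical call),
  // then speak whatever it returns.
  conv.add(`You said: ${utterance}`);
});

module.exports = { app };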

UIATable operation GetCellValue throws error "Failed due to a lacking or broken API call inherited from UI Automation"

I am using the UI Automation add-in to automate and test an application that contains HTML objects in a Java window. I have the UIATable identified & saved in my object repository and the following methods work fine:
MsgBox UIAWindow("**").UIAObject("**").UIATable("**").RowCount 'Prints 3
MsgBox UIAWindow("**").UIAObject("**").UIATable("**").ColumnCount 'Prints 5
However, when I try to get a cell value using any of the methods below:
MsgBox UIAWindow("**").UIAObject("**").UIATable("**").GetCellValue(1,1) 'Error
MsgBox UIAWindow("**").UIAObject("**").UIATable("**").GetCellData(1,1) 'Error
MsgBox UIAWindow("**").UIAObject("**").UIATable("**").GetCellName(1,1) 'Error
I get an error pop-up with the following message:
The test run cannot continue due to an unrecoverable error.
<0x80070057> Failed due to a lacking or broken API call inherited from
UI Automation.
I am using UFT 14.02. What might be the reason for this error, and is there something I can do to resolve it?
Have a look at the UFT 14 Product Availability Matrix. You want the section "UFT GUI Testing UI Automation Add-in".
JavaFX is supported, but HTML is not supported by the UI Automation framework in UFT.
That might be why some methods work and others do not: you can read the Java table, but you cannot validate the HTML content.
(I assume you are testing against a JavaFX application? You just say Java.)
It's worth saying that "Not Supported" doesn't mean it will not work, just that it hasn't been fully tested and certified by Micro Focus.
Additionally, if you check the support pages it has a big note:
Note: The test objects and methods available are completely dependent on the properties and patterns implemented in your application. We recommend that you familiarize yourself with the properties of your application's objects - specifically the Control Type IDs and supported patterns to understand what test objects and methods you can use.
So the error might not be you, and might not be UFT. It might come down to how the application under test is built.
Things you can try...
Try the actual Java add-in - it is possible to use multiple add-ins concurrently - even if it's a workaround for just one object (see the sketch after this list).
Try the standard Windows object identifiers.
Confirm the application is built to support Microsoft's UI Automation.
Update to the latest UFT (UFT 15.01 at the time of writing, now also called UFT One) to make sure your libraries are as up to date as possible.
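For example, a sketch of the first suggestion; the repository names are placeholders, and whether GetCellData is exposed depends on the add-in and the application, so verify the available methods in the Object Spy first:

' With the Java add-in loaded alongside UI Automation, address the same
' table through the Java test objects instead (names are placeholders).
MsgBox JavaWindow("MyWindow").JavaTable("MyTable").GetCellData(1, 1)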
If all that fails let me know. UFT is very flexible around the GUI and depending on how you need to interact with the table there are some other solutions we can try.

What do you lose by ejecting a React app that was created using create-react-app?

I'm interested in using Hot Module Replacement with a newly created React app.
Facebook Incubator's create-react-app uses Webpack 2, which can be configured to support HMR; however, in order to do so, one needs to "eject" the create-react-app project.
As the documentation points out, this is a "one way" operation and cannot be reversed.
If I'm to do this, I want to know what I might be giving up. I've been unable to locate any documentation that explains the potential drawbacks of ejecting.
The current configuration allows your project to get updates from the create-react-app core team. Once you eject, you no longer get this.
It's kind of like pulling in Bootstrap CSS via a CDN as opposed to downloading the source code and injecting it directly into your project.
If you want more control over your webpack, there are ways to configure/customize it without ejecting:
https://www.npmjs.com/package/custom-react-scripts
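Whichever route you take (ejecting or custom-react-scripts), the HMR opt-in itself is only a few lines once webpack is configured for it. A sketch assuming the stock CRA entry point:

// src/index.js - opt the root component into Hot Module Replacement.
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

const render = () => {
  ReactDOM.render(<App />, document.getElementById('root'));
};

render();

// module.hot is injected by webpack when HMR is enabled; the guard keeps
// production builds (where it is undefined) working.
if (module.hot) {
  module.hot.accept('./App', render);
}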

How can I improve iPhone UI Automation?

I was googling a lot trying to find a solution to my problems with UI Automation, and found a post that nicely summarizes the issues:
There's no way to run tests from the command line.(...)
There's no way to set up or reset state. (...)
Part of the previous problem is that UI Automation has no concept of discrete tests. (...)
There's no way to programmatically retrieve the results of the test run. (...)
source: https://content.pivotal.io/blog/iphone-ui-automation-tests-a-decent-start
Problem no. 3 can be solved with Jasmine (https://github.com/pivotal/jasmine-iphone).
How about other problems? Have there been any improvements introduced since that post (July 20, 2010)?
And one more problem: is it true that the only existing method for selecting a particular UI element is adding an accessibility label in the application source code?
While UI Automation has improved since that post was made, the improvements that I've seen have all been related to reliability rather than new functionality.
He brings up good points about some of the issues with using UI Automation for more serious testing. If you read the comments later on, there's a significant amount of discussion about ways to address these issues.
The topic of running tests from the command line is discussed in this question, where a potential solution is hinted at in the Apple Developer Forums. I've not tried this myself.
You can export the results of a test after it is run, which you could parse offline.
Finally, in regards to your last question, you can address UI elements without assigning them an accessibility label. Many common UIKit controls are accessible by default, so you can already target them by name. Otherwise, you can pick out views from their location in the display hierarchy, like in the following example:
var tableView = mainWindow.tableViews()[0];
As always, if there's something missing from the UI Automation tool that is important to you, file an enhancement request so that it might find its way into the next version of the SDK.
Have you tried IMAT? https://code.intuit.com/sf/sfmain/do/viewProject/projects.ginsu. It uses the native JavaScript SDK that Apple provides and can be triggered via the command line or Instruments.
In response to each of your questions:
There's no way to run tests from the command line.(...)
Apple now provides this. With IMAT, you can kick off tests via the command line or via Instruments. Before Apple provided the command-line interface, we were using AppleScript to bring up Instruments and then kick off the tests - nasty.
There's no way to set up or reset state. (...)
Check out this state diagram: https://code.intuit.com/sf/wiki/do/viewPage/projects.ginsu/wiki/RecoveringFromTestFailures
Part of the previous problem is that UI Automation has no concept of discrete tests. (...)
Agreed. Both IMAT and tuneup.js (https://github.com/alexvollmer/tuneup_js#readme) allow for this.
There's no way to programmatically retrieve the results of the test run. (...)
Reading the resulting plist file is not trivial. IMAT provides a JUnit-like report after a test run by reading the plist file, and this is picked up by my CI tool (TeamCity, Jenkins, CruiseControl).
Check out http://lemonjar.com/blog/?p=69, which talks about how to run UIA from the command line.
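The approach described there boils down to driving Instruments with the Automation template from a shell, roughly like the following (paths and the exact flags vary by Xcode version, so treat this as a sketch):

instruments -w <device-udid> \
  -t /path/to/Automation.tracetemplate \
  /path/to/MyApp.app \
  -e UIASCRIPT /path/to/test.js \
  -e UIARESULTSPATH /path/to/results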
Try checking the element hierarchy; the table may be nested inside a UIScrollView.
var tableV = mainWindowTarget.scrollViews()[0].tableViews()[0].scrollToElementWithName("Name of element inside the cell");
The above script will work even if the element is in the 12th cell (but the name must match the one inside the cell exactly).

JUnit testing with Google Web Toolkit

I would like to be able to run a set of unit tests by linking to them in my application (e.g. I want to be able to click on a link and have it run a set of JUnit tests). The problem is that GWT and JUnit don't seem to be designed for this; it seems the tests can only be run at build time.
I would like to be able to include my test code in my application and, from onModuleLoad for example, run a set of tests.
I tried to just instantiate a test object:
StockWatcherTest tester = new StockWatcherTest();
tester.testSimple();
but I get:
No source code is available for type com.google.StockWatcher.client.StockWatcherTest;
even though I include the module specifically.
Would anyone know a way to do this? I just want to be able to display the test results within the browser.
If you are trying to test UI elements in GWT using JUnit, unfortunately you cannot do so. JUnit testing is limited to RPC and non-UI client-side testing. See this thread for a great discussion of what you can and cannot do with GWT JUnit testing.
If you are not trying to test UI elements, but are instead trying to exercise your RPC code or client-side logic with test values (hence wanting to click a link and run a set of JUnit tests), then follow this guide from testearly.com: Testing GWT with JUnit. In short, make sure the method you are testing does not touch any UI elements, and if the method is asynchronous in nature, you must add a timer (sketched below).
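A sketch of that timer pattern in a GWTTestCase; the class and module names are illustrative, borrowed from the question:

import com.google.gwt.junit.client.GWTTestCase;
import com.google.gwt.user.client.Timer;

public class StockWatcherTest extends GWTTestCase {

  @Override
  public String getModuleName() {
    // Must name the .gwt.xml module that contains the code under test.
    return "com.google.StockWatcher";
  }

  public void testAsyncCall() {
    // Tell the harness to wait; the test fails if finishTest() is not
    // called within 10 seconds.
    delayTestFinish(10000);

    new Timer() {
      @Override
      public void run() {
        // Assertions on the asynchronous result would go here.
        finishTest(); // report success
      }
    }.schedule(500);
  }
}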
In GWT 2.0, HtmlUnit support was added. You may wish to use it instead of firing up a browser each time you wish to test.