Unable to expand conversation in Botium report

In the Botium test report using the Mocha reporter mochawesome, we get utterance-wise results, e.g.:
Welcome/Welcome_input-L1
Welcome/Welcome_input-L2
Can we get the actual conversation dialog in the report? I have tried
"expandConvos": true,
"expandUtterancesToConvos": true
but I think these settings are not meant for reporting. Can we expand the conversation in the report, including request and response?

Botium CLI uses Mocha internally as a test runner. It supports all report formats supported by Mocha: https://mochajs.org/#reporters
Additionally, there is the mochawesome reporter included as well as a CSV exporter. Configuration is done with a positional parameter:
botium-cli run csv
Some reporters accept additional options; these can be specified with the --reporter-options switch.
The rendering of the test output is up to the selected reporter - mochawesome is not in any way special to chatbot testing, so it does not support rendering of the conversation. If you want this kind of rendering, you will either have to code it yourself (and make a pull request to the GitHub project) or use Botium Box.
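For example, a mochawesome run with reporter options could look like this (the option names shown are standard mochawesome options; check the mochawesome docs for the full list):

```
botium-cli run mochawesome --reporter-options reportDir=reports,reportFilename=botium
```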

Related

Pass string as link in allure report message

I am trying to pass a link in an assertion error message in the Allure report, for example:
assert expression, '...www.google.com...'
Locally this isn't a problem, as I see that in the terminal web addresses are automatically formatted as URLs. However, when I run the same test on a remote machine, the generated Allure report shows the same address as a regular string.
I tried using urllib.parse functions (like quote and urlparse) but it didn't help.
I also couldn't find any questions or answers about this anywhere online; the only option I found was attaching the link to the Allure report, but that is not my goal.
I would like to be able to click on the link appearing in the assertion error message, as it may contain several links that I don't want to attach to the report itself.
Example report: marked in this report are the links I want to be clickable.
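One workaround to experiment with (this is an assumption, not a confirmed Allure feature): wrap each URL in an HTML anchor when building the assertion message, in case the report viewer renders HTML there. The helper below only does the string formatting; whether Allure escapes the markup in assertion messages needs to be verified.

```python
import re

URL_RE = re.compile(r"https?://\S+")

def linkify(message: str) -> str:
    """Wrap every http(s) URL in the message in an HTML anchor tag.

    Whether the anchors actually render as clickable links depends on
    the report viewer; Allure may escape HTML in assertion messages,
    so treat this as an experiment rather than a guaranteed fix.
    """
    return URL_RE.sub(lambda m: f'<a href="{m.group(0)}">{m.group(0)}</a>', message)

# Build an assertion message with clickable-link markup
msg = linkify("expected page, got redirect to https://www.google.com/")
print(msg)
```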

Google Actions CLI 3.1.0 version and actions.intent.TEXT

I want to be able to talk with Google Assistant, but connect the Actions project directly to an NLP service I already have running on my server. In other words, NOT use dialogflow.
All the following examples show how to do this.
With Rasa
https://blog.rasa.com/going-beyond-hey-google-building-a-rasa-powered-google-assistant/
With LUIS
https://www.grokkingandroid.com/using-the-actions-sdk/
https://dzone.com/articles/using-the-actions-sdk-for-google-assistant-develop
With Watson
https://www.youtube.com/watch?v=no0R0bSkHXc
They use the actions.intent.MAIN as the invocation and actions.intent.TEXT for all other utterances from the talker.
This is what I need. I don’t want to create a load of intents, with utterance phrases, inside the Action because I just want all the phrases spoken by the talker to be passed to my server, and for my NLP service to deal with them.
So I set up a new Actions project, installed the Actions CLI, and then spent three days trying all possible combinations without success, because all these examples use gactions CLI 2.1.3, and Google has now moved on to gactions CLI 3.1.0.
Not only have the commands changed, but so have the file formats and structure.
It appears there is also a new Google Actions Console, and actions.intent.TEXT is no longer available.
My Action is connected to my server via webhook, but I cannot figure out how to get actions.intent.TEXT included and working.
Everything I find, even here
Publishing Actions on google without Dialogflow
is from before the version update and follows the same pattern.
Can anyone point to an up-to-date (v3.1.0) discussion, tutorial or example of how to send all talker phrases through to an NLP that isn't Dialogflow, or has Google closed that avenue?
Is it possible to somehow go back and use the 2.1 CLI, either with the new Console or by reverting the Console? (I have both CLI versions; I can see how different their commands are.)
Is it possible to go back and use 2.1?
There is no way to go back to AoG 2. You probably also don't want to: newer features are only available with v3.
Can I use my own NLP with v3?
Yes, although it isn't as obvious, and there are some changes in semantics.
As an overview, what you'll need to do is:
Create a Type that can accept "Free form text". I usually call this type "Any".
Create a Custom Intent that has a single parameter of this Any Type and at least one training phrase that captures everything for this parameter. (So you should add one training phrase, highlight the entire phrase, and assign it to the parameter. Sometimes I also add additional phrases that include words I don't want to capture.) I usually call the Intent "matchAny" and the parameter "any".
Finally, you'll have a Scene that you transition to from the Main invocation. When it matches the "matchAny" Intent, it should call your webhook with a handler name. Your webhook will be called with the "any" parameter set to the user utterance. (Note that the JSON format has also changed.)
That seems like a lot of work. Isn't there just some way to do all that from the command line?
Yes. You can do all of that in the configuration files that the CLI accesses and then upload it. (You can then also use the console to review the configuration, if necessary, to make sure they're configured as you expect. You can shift back and forth between them as appropriate.)
Google also has a GitHub repository that contains most of the files pre-configured for this sort of setup.
You will need to update the configuration from the repository to handle the webhook correctly (it includes code to illustrate what is happening using the inline code editor) and to add your project ID.
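Based on the layout used in Google's sample repositories, the SDK files for the steps above might look roughly like this (file names, the handler name, and the exact training-phrase annotation syntax are assumptions here - verify them against the gactions 3 samples):

```yaml
# sdk/custom/types/any.yaml - a free-form text type
freeText: {}

# sdk/custom/intents/matchAny.yaml - capture the whole utterance
parameters:
- name: any
  type:
    name: any
trainingPhrases:
- ($any 'hello world' auto=true)

# sdk/custom/scenes/MainScene.yaml - route the intent to your webhook
intentEvents:
- intent: matchAny
  handler:
    webhookHandler: handleAny   # hypothetical handler name
```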

Use openapi service in Flutter Drive integration test, but run into dart:ui problem

The app I am trying to test makes use of feature toggles to enable/disable certain parts of the app, but the tests I've written cover all the features. When a user logs in, the app fetches the feature toggles from a REST service (using a class that wraps the generated openapi client), so it knows what to show and what not to show.
Now I want to include those feature toggles in my tests, so that the corresponding tests are skipped rather than simply failing if some parts aren't enabled. However, when I try to include the class that does the call, I get errors about dart:ui in the console, and the test no longer runs. When I (recursively) check the imports of those service classes, there are some imports of widgets.dart, so I guess that's the problem. I tried removing most of them, but since we use localized strings for error messages etc., it is becoming a very cumbersome job to remove all of that from those files.
So before I continue down that path, I was wondering: is there an easy way to include a call to a REST service in an integration test?
I checked the Flutter Drive documentation and searched for similar questions online, but haven't really found anything.

How to make a systematic report in protractor

I am working with Protractor to test an AngularJS application. I am able to fetch a report, but I want to include some more points and details about the test execution: for example, the module name, test case name, severity, priority, where the test failed if it fails, etc. Where should I add all these points so that I can fetch a complete, detailed report? I have attached the report I currently get.
Please help me find a solution, as I am new to Protractor. Thanks a lot in advance.
In Protractor e2e testing, spec reporting is done by the Jasmine framework, not by Protractor itself. You can either leverage one of the popular reporters listed below or create your own using a custom reporter.
https://www.npmjs.com/package/jasmine-spec-reporter
https://www.npmjs.com/package/protractor-html-reporter
https://www.npmjs.com/package/protractor-beautiful-reporter
http://jasmine.github.io/2.1/custom_reporter.html
Or you can try the Allure plugin: http://allure.qatools.ru/
I also advise using the Allure report. It is easy to set up and has good documentation. Just want to mention that Allure 2 is ready as well; take a look at GitHub and its JS integration.
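As a configuration sketch, registering one of these reporters happens in the Protractor config's onPrepare hook. The option names below follow jasmine-spec-reporter's README and may differ between versions:

```javascript
// protractor.conf.js (fragment) - register jasmine-spec-reporter
const { SpecReporter } = require('jasmine-spec-reporter');

exports.config = {
  framework: 'jasmine',
  specs: ['e2e/**/*.spec.js'],
  onPrepare() {
    jasmine.getEnv().addReporter(
      new SpecReporter({
        // show full stack traces so you can see where a failed test broke
        spec: { displayStacktrace: 'pretty' },
      })
    );
  },
};
```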

Sending one single email for multiple builders

I am setting up a build environment with a single master buildbot and multiple buildslaves. I have multiple builders which run on the available slaves. The builders can be triggered by force, scheduled to run as nightly builds, or scheduled to run when changes are detected.
I have set up a MailNotifier to send the results/status of the builds. This MailNotifier sends one email for each builder. What I want now is to send a single email covering multiple builders: for example, after all the nightly builders have finished successfully, trigger some function in the buildbot master that sends a single email with the results of all the nightly builders.
I would like to know whether something like this is possible and whether buildbot provides support for sending a single email for multiple builders. If not, any pointers on how to accomplish this?
Thanks in advance!
You are looking for the buildSetSummary parameter of MailNotifier: if you set this parameter to True, it will send a single e-mail listing the statuses of all completed builds.
More information: Buildbot Manual
It's been quite a while since you asked this, but just in case you still need an answer, look at the settings for buildbot's MailNotifier. The default behaviour is to send an e-mail for each builder, so you have to specify which ones you're interested in using the builders argument (scroll down the page):
builders (list of strings). A list of builder names for which mail
should be sent. Defaults to None (send mail for all builds). Use
either builders or categories, but not both.
Hope this is what you were looking for!
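Combining the two, a master.cfg fragment could look like the sketch below. The addresses and builder names are placeholders, and the exact keyword arguments depend on your Buildbot version (newer releases configure this via report generators instead):

```python
# master.cfg (fragment) - one summary mail per build set,
# restricted to the nightly builders we care about.
from buildbot.plugins import reporters

c['services'] = [
    reporters.MailNotifier(
        fromaddr="buildbot@example.com",            # placeholder address
        sendToInterestedUsers=False,
        extraRecipients=["team@example.com"],       # placeholder address
        builders=["nightly-linux", "nightly-win"],  # placeholder builder names
        buildSetSummary=True,  # one mail summarizing all builds in the set
    ),
]
```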
Create a TriggerableScheduler with all your builders in builderNames. Then, create a "super" builder with the following 2 steps:
trigger the new TriggerableScheduler with waitForFinish=True
send the email
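A sketch of that setup in master.cfg (builder and scheduler names are placeholders; check the Buildbot manual for your version):

```python
# master.cfg (fragment) - a "super" builder that runs all nightly
# builders via a Triggerable scheduler, then sends a single mail.
from buildbot.plugins import schedulers, steps, util

c['schedulers'] = [
    schedulers.Triggerable(
        name="nightly-all",
        builderNames=["nightly-linux", "nightly-win"],  # placeholders
    ),
]

nightly_factory = util.BuildFactory([
    # Step 1: fire all nightly builders and wait for them to finish
    steps.Trigger(schedulerNames=["nightly-all"], waitForFinish=True),
    # Step 2: the e-mail sending would go here, e.g. via a custom step,
    # or by pointing a MailNotifier at only this "super" builder.
])
```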