Botium CLI tests for Rasa chatbot passing for any random convo (that should not pass)

I'm trying to set up Botium CLI with my Rasa chatbot for automated integration and dialog flow tests. However, Botium passes tests that do not describe a conversation flow that would even be possible with my chatbot.
I'm using it with botium-connector-rasa and this is my botium.json config file:
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "change me later",
      "CONTAINERMODE": "rasa",
      "RASA_MODE": "DIALOG_AND_NLU",
      "RASA_ENDPOINT_URL": "http://localhost:8080/"
    },
    "Sources": {},
    "Envs": {}
  }
}
When I run botium-cli with --convos pointing to my folder of .convo.txt files, it passes the tests even when they should fail.
.convo.txt file:
Test case 02: Robots' hell
# me
random question
# bot
random answer
Command used for running the tests:
botium-cli run --config botium.json --convos ./convos/
The output shows the test as passing.
What is going on? Why does Botium pass these random convos when it should fail them?
I've also talked to the bot via botium-cli emulator and that works properly: I can communicate with my chatbot as expected.

The issue was in the .convo.txt files' syntax.
I just had to remove the space between the # and the me/bot. With the space, Botium apparently doesn't recognize those lines as #me/#bot sections, so the convo contains nothing to assert and passes trivially. The provided example convo should look like this instead:
Test case 02: Robots' hell
#me
random question
#bot
random answer
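The same corrected syntax extends to multi-turn convos, with one #me/#bot header per turn. The utterances below are only placeholders, not my bot's real intents or answers:
Test case 03: Multi-turn example
#me
hello
#bot
Hi, how can I help you?
#me
random question
#bot
random answer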

Printing the Console output in the Azure DevOps Test Run task

I am doing some initial one-off setup in a [BeforeTestRun] hook for my SpecFlow tests. It checks whether certain users exist and creates them with specific roles and permissions if they don't, so the automated tests can use them. The function that does this prints a lot of useful information via Console.WriteLine.
When I run the tests on my local system I can see the output from this hook on the main feature file, and the output of each scenario under each of them. But when I run the tests via an Azure DevOps pipeline, I am not sure where to find the output for [BeforeTestRun], because it is not bound to a particular test scenario. The console of the Run Tests task has no information about this.
Can someone please help me surface this output somewhere so I can act accordingly?
I tried System.Diagnostics.Debug.Print, System.Diagnostics.Debug.WriteLine and System.Diagnostics.Trace.WriteLine, but nothing seems to show up in the pipeline console.
[BeforeTestRun]
public static void BeforeRun()
{
    Console.WriteLine(
        "Before Test run analyzing the users and their needed properties for performing automation run");
}
I want my output to be visible somewhere so I can act on that information if needed.
It's not possible to get this into the console logs of the task:
The product currently does not support printing console logs for passing tests and we do not currently have plans to support this in the near future.
(Source: https://developercommunity.visualstudio.com/content/problem/631082/printing-the-console-output-in-the-azure-devops-te.html)
However, there's another way:
Your build will have an attachment with the file extension .trx. This is an XML file and contains an Output element for each test (see also https://stackoverflow.com/a/55452011):
<TestRun id="[omitted]" name="[omitted] 2020-01-10 17:59:35" runUser="[omitted]" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Times creation="2020-01-10T17:59:35.8919298+01:00" queuing="2020-01-10T17:59:35.8919298+01:00" start="2020-01-10T17:59:26.5626373+01:00" finish="2020-01-10T17:59:35.9209479+01:00" />
  <Results>
    <UnitTestResult testName="TestMethod1">
      <Output>
        <StdOut>Test</StdOut>
      </Output>
    </UnitTestResult>
  </Results>
</TestRun>
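If you want to inspect that output programmatically, one option is to download the .trx attachment and read the Output/StdOut elements yourself, for example with a small Python script (the file name here is just a placeholder):
# Sketch: print the per-test console output stored in a downloaded .trx file.
import xml.etree.ElementTree as ET

# All elements in a .trx file live in this namespace.
NS = {"t": "http://microsoft.com/schemas/VisualStudio/TeamTest/2010"}

tree = ET.parse("results.trx")  # placeholder file name
for result in tree.getroot().findall(".//t:UnitTestResult", NS):
    stdout = result.find("./t:Output/t:StdOut", NS)
    if stdout is not None and stdout.text:
        print(result.get("testName"), "->", stdout.text.strip())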

Rasa WebChat integration

I have created a chatbot on Slack using Rasa-Core and Rasa-NLU by watching this video: https://vimeo.com/254777331
It works pretty well on Slack.com. But what I need is to add this to our website using a code snippet. When I looked into that, I found that Rasa Webchat (https://github.com/mrbot-ai/rasa-webchat : a simple webchat widget to connect with a chatbot) can be used to add the chatbot to the website. So I pasted this code on my website inside the <body> tag.
<div id="webchat"/>
<script src="https://storage.googleapis.com/mrbot-cdn/webchat-0.4.1.js"></script>
<script>
  WebChat.default.init({
    selector: "#webchat",
    initPayload: "/get_started",
    interval: 1000, // 1000 ms between each message
    customData: {"userId": "123"}, // arbitrary custom data. Stay minimal as this will be added to the socket
    socketUrl: "http://localhost:5500",
    socketPath: "/socket.io/",
    title: "Title",
    subtitle: "Subtitle",
    profileAvatar: "http://to.avat.ar",
  })
</script>
Run_app.py is the file which starts the chatbot (it's shown in the video: https://vimeo.com/254777331).
Here is the code of Run_app.py:
from rasa_core.channels import HttpInputChannel
from rasa_core.agent import Agent
from rasa_core.interpreter import RasaNLUInterpreter
from rasa_slack_connector import SlackInput

nlu_interpreter = RasaNLUInterpreter('./models/nlu/default/weathernlu')
agent = Agent.load('./models/dialogue', interpreter=nlu_interpreter)

input_channel = SlackInput(
    'xoxp-381510545829-382263177798-381274424643-a3b461a2ffe4a595e35795e1f98492c9',  # app verification token
    'xoxb-381510545829-381150752228-kNSPU0X7HpaS8oJaqd77TPQE',  # bot verification token
    'B709JgyLSSyKoodEDwOiJzic',  # slack verification token
    True)

agent.handle_channel(HttpInputChannel(5004, '/', input_channel))
I want to connect this Python chatbot to Rasa Webchat instead of using Slack, but I don't know how to do that. I tried looking everywhere, but I couldn't find anything helpful on the internet. Can someone help me? Thank you.
To connect Rasa Core with your web chat, do the following:
Create a credentials file (credentials.yml) with the following content:
socketio:
  user_message_evt: user_uttered
  bot_message_evt: bot_uttered
Start Rasa Core with the following command (I assume you have already trained your model):
python -m rasa_core.run \
  --credentials <path to your credentials>.yml \
  -d <path to your trained core model> \
  -p 5500  # either run Core on port 5500 as here, or change the socketUrl port in the js snippet to 5005
Since you specified the socketio configuration in your credentials file, Rasa Core automatically starts the SocketIO Input Channel which the script on your website then connects to.
To add NLU you have two options:
Specify the trained NLU model with -u <path to model> in your Rasa Core run command
Run a separate NLU server and configure it using an endpoint configuration (see the sketch below). This is explained in depth in the docs
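For that second option, a minimal endpoint configuration might look like this. The file name, port and exact key are assumptions based on the old 0.x docs, so double-check them against the documentation for your Rasa Core version:
# endpoints.yml -- points Rasa Core at a separately running NLU server (hypothetical URL)
nlu:
  url: "http://localhost:5000"
You would then add --endpoints endpoints.yml to the rasa_core.run command above.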
The Rasa Core documentation might also help you.
In order to have a web channel, you need a front end which can send and receive chat utterances. There is an open-source project by scalableminds; look at the demo first:
demo
To integrate your Rasa bot with this chatroom, you can install the Chatroom project as shown in the GitHub project below. It works with the latest Rasa 0.11 version as well.
Chatroom by Scalableminds
You may be facing a dependency issue: check which version of Rasa you are using and which version of rasa-webchat.
The webchat widget doesn't support Rasa version 2+.

Importing results from the X-Ray API updates each test's labels

I am using the X-Ray API to import an NUnit TestResult.xml result file into Jira. The scenarios that are being run are present in Jira as Test (XRay) issues. Each NUnit scenario has a tag that matches the Jira test key; there are other tags which are not relevant to the Jira test. When the result is imported, a new execution is created and the tests are matched and added to that execution. The problem I am facing is that the Jira tests are updated with new labels (the ones that are present in NUnit).
Is it possible to disable editing of labels for tests in Jira, so the tests are added to the execution as-is and only their status is changed?
Steps:
Run any NUnit test containing scenarios that can be matched to Jira tests.
Import TestResults.xml using the "rest/raven/1.0/import/execution/nunit/multipart" endpoint.
A new execution is created in Jira, and existing tests are matched based on their key and added to the execution.
Notice that the tests in Jira are updated with labels taken from the TestResults.xml file plus additional labels generated from the test name and any error screenshot names.
My info.json file:
{
  "fields": {
    "project": {
      "key": "SB"
    },
    "summary": "Automatic result import from automation run",
    "issuetype": {
      "name": "Test Execution"
    }
  }
}
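For context, the import in step 2 ships the result file and this info.json in a single multipart request, roughly along these lines. The Jira base URL and credentials are placeholders, and the multipart part names ("file" and "info") should be double-checked against the Xray docs for your version:
curl -u <user>:<password> \
  -F "file=@TestResults.xml" \
  -F "info=@info.json" \
  "https://<your-jira>/rest/raven/1.0/import/execution/nunit/multipart"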
Specflow scenario that is executed:
@Regression @SB_110325
@Web @ResponsiveDesktop
Scenario: Favorites for Log in user
Given Home page is open
And I login successfully
Updated Jira test after the imported result: notice that only Regression_pack was the original label.
Update: currently this is not possible. I have reported this to the developers and an improvement task was created.
If other people need this implemented, they can vote for it here (external link) so it will be picked up and implemented by the developers.

Setting headless as CLI arg

My Protractor suite generally uses the Chrome non-headless mode so the tests can be monitored and stuff, but I tend to switch often between headless and normal while writing tests. Constantly changing the conf.js file is a hassle so I'd like to be able to do this via a command line argument. Something like the following:
npm test -- --headless
npm test-headless
As you can see I'm running Protractor via npm, so a complex argument construction is not a problem here.
I haven't been able to find a way to do this using uncle Google. Can someone point me in the right direction?
Keep it simple. Create two Protractor config files:
- one for local (non-headless) runs, e.g. protractor.local.conf.js
- one for headless runs, which you already have
Then create npm scripts that run whichever one you need, for example:
"scripts": {
  "test-headless": "protractor ./config/protractor.headless.conf.js",
  "test-local": "protractor ./config/protractor.local.conf.js"
}
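With these in place you can switch modes from the command line without touching any config file:
npm run test-headless
npm run test-local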
A somewhat hackish solution, but it works! Check for this flag (or any other) in your configuration file using node's process.argv global. You can then dynamically configure Protractor accordingly. This is one of the great perks of JS config files.
For example:
// true when --headless was passed on the command line (e.g. npm test -- --headless)
const isHeadless = process.argv.includes('--headless');

exports.config = {
  capabilities: {
    browserName: 'chrome',
    chromeOptions: {
      // only pass --headless to Chrome when requested; filter(Boolean) drops the false entry otherwise
      args: [
        isHeadless && '--headless'
      ].filter(Boolean)
    }
  }
};
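Invoked as in the question, e.g.:
npm test -- --headless
the extra flag ends up in the Protractor process's process.argv, so isHeadless flips to true and Chrome starts headless.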
This does raise a warning in the Protractor CLI:
Ignoring unknown extra flags: headless. This will be an error in
future versions, please use --disableChecks flag to disable the
Protractor CLI flag checks.
As with all hacks, you use them with some level of risk. The --disableChecks flag will get rid of the warning and future error, but may introduce other issues.

Is there a way to run a single test within the e2e tests in Kubernetes?

I am trying to run a single set of tests within the e2e Kubernetes tests. I am quite confused as to how the tests are organized; is there a comprehensive list of all the tests?
Thanks!
Assuming the tests are placed at the ./tests/e2e path in the repository:
If the tests are written in Go, they are mostly written using either the standard testing package or the Ginkgo framework.
For running the tests written using the standard testing package:
Add a build tag at the start of your test file (the tag line must be followed by a blank line before the package clause), like
// +build <my-test>
Then run the tests by specifying the tag name: go test -v ./tests/e2e -tags <my-test>
For running the tests written using Ginkgo:
go test -ginkgo.dryRun ./tests/e2e/... to list all the tests in the package.
go test -ginkgo.focus "<regex>" ./tests/e2e/... to run only the tests whose names match the focus regex.
go test -ginkgo.skip "<regex>" ./tests/e2e/... to skip the tests whose names match the regex.
If you have the e2e.test binary, you can list all available tests with the following flag: ./e2e.test --ginkgo.dryRun. Then if you want a single test, type ./e2e.test --ginkgo.focus="<name of your test>"; pay attention that all special characters in the test name must be escaped. For example, if you want to run only conformance tests: --ginkgo.focus="\[Conformance\]".
Just in case, the right way of running particular focused e2e tests is described officially here: https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-tests.md
It would be something like this:
go run hack/e2e.go -- --test --test_args="--ginkgo.focus=${matching regex}"
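For example, plugging in the conformance tag mentioned in the previous answer as the regex (with the brackets escaped):
go run hack/e2e.go -- --test --test_args="--ginkgo.focus=\[Conformance\]"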