'rasa shell' does not initiate a chat session as expected - chatbot

According to the documentation, the command 'rasa shell' is supposed to start a chat session directly in the terminal upon execution. In my case, however, it behaves as shown in the image below.
But the output is supposed to be a two-way conversation between the bot and the user, as given below.
Your input -> hi
<Bot's response to 'hi'>
Your input -> something
<Bot's response to 'something'>
May I know what the reason for this behaviour is? (Please note that I noticed a similar question to mine here, but since I did not find it descriptive enough, I have posted this question.)

Check that:
There are no redundant (duplicate) intent names in the domain.yaml file.
There are no missing quotation marks in the responses.
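For illustration, a minimal domain snippet with unique intent names and properly quoted response text might look like this (the intent and response names are made up; older Rasa versions call the responses: section templates:):

intents:
  - greet
  - goodbye

responses:
  utter_greet:
    - text: "Hello! How can I help you?"
  utter_goodbye:
    - text: "Goodbye!"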


IBM Assistant (Conversation) Error: SpelEvaluationException when evaluating dialog node ID

I have a flow in my chatbot application where I switch workspaces, and it's giving me a SpelEvaluationException error.
I have a router workspace that determines the initial intent of the client; once I know the initial intent, I route the next request to the appropriate workspace.
Workspace Router :
Bot :- Hey this is an awesome bot, what do you need help with
1. Apples
2. Bananas
3. Oranges
Client :- I need help with my apples
--- I pass a custom JSON from the workspace which tells my app to route the next request to the Apples workspace ---
Apple Workspace :
BOT: Hey, what can I help you with in Apples?
The flow works fine, but when I send a request to the Apples workspace, I get the following error in log_message.
SpelEvaluationException when evaluating dialog node ID [node_2_1517933972148]. The syntax of condition [intents[0].confidence < 0.50] is valid, but cannot be evaluated. Check that objects in expression are not null or out of bounds.\nSpEL evaluation error: EL1025E: The collection has '0' elements, index '0' is invalid\n
So somehow you are asking Watson to evaluate the intents array before actually passing any input, so no intent data is returned; thus the SpEL expression fails and throws the error.
So however you are calling that second Apples workspace, make sure you have input text being sent as well.
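For example, the body of the message request to the Apples workspace should carry the user's utterance in the input field, roughly like this (the text here is just an illustration):

{
  "input": {
    "text": "I need help with my apples"
  }
}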
The same thing happened to me. You can try to jump by response so that the condition is not evaluated, and check that you are not trying to save the intent in a variable inside the JSON. Possibly you have already solved it, but I leave my proposal hoping it will serve someone else.

Why doesn't elapsed_time() work from log_message() even if called from post_system hook?

If I try to log benchmark->elapsed_time() in the post_system hook, it just logs {elapsed_time}, as if I had called it from a controller.
CodeIgniter documentation says:
"post_system Called after the final rendered page is sent to the browser, at the end of system execution after the finalized data is sent to the browser."
It also says you are supposed to echo elapsed_time() in a view to show it to the user, but how is it possible that elapsed_time() is still being calculated after the finalized data has been sent to the browser?
I feel like I'm being lied to.
People keep saying I should set my own marks and get the difference, but that's not the same as using this...
Turns out the documentation also says:
"If the first parameter is empty this function instead returns the {elapsed_time} pseudo-variable. This permits the full system execution time to be shown in a template. The output class will swap the real value for this variable."
I went to the Output class and found the two marks it uses, total_execution_time_start and total_execution_time_end, and I can use those in the post_system hook.
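A rough sketch of such a hook (the file and function names below are made up, and hooks must be enabled in application/config/config.php):

<?php
// application/hooks/timing_hook.php
function log_total_execution_time()
{
    $CI =& get_instance();

    // Same two marks the Output class uses to replace {elapsed_time}
    $elapsed = $CI->benchmark->elapsed_time(
        'total_execution_time_start',
        'total_execution_time_end'
    );

    log_message('info', 'Total execution time: ' . $elapsed);
}

// application/config/hooks.php
$hook['post_system'] = array(
    'function' => 'log_total_execution_time',
    'filename' => 'timing_hook.php',
    'filepath' => 'hooks',
);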

Why Watson Personality Insights shows different results using different API versions/demo

My apologies if the question is a duplicate. We are facing an issue with the analysis of a profile using the Watson Personality Insights API in Spanish. We implemented a demo using PI API version 2 and then tested the results (exact same text) with the demo published on developer cloud (in Spanish), and we found important differences in how the Big Five were calculated, even though the facet values were not that different. Is it possible that these differences are caused by the API version? The issue is that with our demo the Big Five values produced a rather negative summary profile, while the developer cloud summary is kinder.
We could send both result JSONs. For example, here is how the Big Five were rated:
Big Five            DeveloperCloud    Demo V2
Openness            0.773834349       0.847273232
Conscientiousness   0.916616088       0.914907481
Extraversion        0.796331544       0.612606551
Agreeableness       0.17445636        0.096118648
Emotional range     0.036287447       0.01623536
thanks in advance!!
So the API version would not make a difference, as that just governs the format of the API; the back-end models are the same for both v2 and v3 of the API.
So the gist of your question is that when you run the same text in your app and in the demo, you get different Big Five results, while the facet values are about the same.
This might be easiest solved by you opening a support ticket so we can debug the issue together; if you'd rather not do that then can you provide a sample text? Typically it boils down to a difference in the way the text is parsed.
Another question: did you try making the request using curl? That would cut out any custom logic in your app and narrow down the problem.
Thanks Neil for your answer!
We tested the text using curl and noticed that the results did not change with the service version used, but rather with how the text was sent. If we called the service with plain-text input (formatted in UTF-8, with line breaks), it returned the same results for version 2 and version 3, which also matched the ones from our demo. If we called the service with JSON input WITHOUT line breaks, it returned the same values as well. But if we called the service with JSON input WITH line breaks, the results changed and almost matched those shown by the IBM demo. My question here is: which are the correct results, the ones returned when the text is sent as plain-text input (with line breaks), or when it is sent as JSON input (with line breaks)? Is there any technical guideline, besides the one shown on developer cloud, on how the text should be parsed to use this service?
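For reference, the two kinds of calls looked roughly like this (credentials, version date, and file names are placeholders):

# Plain-text input (UTF-8, line breaks kept as-is)
curl -X POST -u "{username}:{password}" \
  --header "Content-Type: text/plain;charset=utf-8" \
  --header "Content-Language: es" \
  --data-binary @profile_text.txt \
  "https://gateway.watsonplatform.net/personality-insights/api/v3/profile?version=2017-10-13"

# JSON content-items input
curl -X POST -u "{username}:{password}" \
  --header "Content-Type: application/json" \
  --data-binary @profile_items.json \
  "https://gateway.watsonplatform.net/personality-insights/api/v3/profile?version=2017-10-13"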
Thanks again!

Get Status of Driver Inside Visual Basic

I'm currently trying to detect which drivers appear as "not working" in Visual Basic.
This unknown device is a good example of what I'm trying to grab (notice how it has the flag DN_HAS_PROBLEM).
I've tried using searches such as:
Dim searcher As New ManagementObjectSearcher("root\CIMV2", "SELECT * FROM Win32_SystemDriver")
And looping over searcher.Get(), following this documentation.
However, none of these seem to return what I am looking for.
Would anyone happen to know how I can get the DN_ statuses within Visual Basic?
Thanks!
The Win32_SystemDriver class documentation lists these possible values for the Status property:
OK
Error
Degraded
Unknown
Pred Fail
Starting
Stopping
Service
Stressed
NonRecover
No Contact
Lost Comm
...whereas DN_HAS_PROBLEM comes from the CM_Get_DevNode_Status function, or perhaps also from other system calls.
There may not be a way to get that specific code from the API you're using, but perhaps the existing Status properties will suffice for your needs if you don't need to know more specific failure reasons.
If you do need to know that specific status, you'll have to call other APIs, like the one I called out.
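If those generic values are enough, a rough sketch building on the query from the question might look like this (it only reports the Status strings listed above, not the DN_ flags, and needs a reference to System.Management):

Imports System.Management

Module DriverStatusCheck
    Sub Main()
        ' Query all system drivers and report any whose Status is not "OK"
        Dim searcher As New ManagementObjectSearcher(
            "root\CIMV2",
            "SELECT Name, State, Status FROM Win32_SystemDriver")

        For Each driver As ManagementObject In searcher.Get()
            Dim status As String = Convert.ToString(driver("Status"))
            If status <> "OK" Then
                Console.WriteLine("{0}: State={1}, Status={2}",
                                  driver("Name"), driver("State"), status)
            End If
        Next
    End Sub
End Module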

Perl Net::Appliance::Session waitfor?

I have a problem with Net::Appliance::Session. I created a session and executed my command. After execution it prompts me with a question (yes/no). I want to answer it but didn't find a way to do it. Below you can see my attempts:
$session->cmd($command);
$session->waitfor(Match=>'/.*yes*/');
$session->print("no");
$session->waitfor(Match=>'');
$session->print("y");
I don't know where the problem is. According to the CPAN documentation, Net::Telnet has the method waitfor, and the Session documentation says that we can use waitfor(). Another thing said there is that the method "cmd" has a Match argument which includes all the features of waitfor(). So I changed my code as below:
$session->cmd($command, Match=>'/.*yes*/');
$session->print("no");
Executing this reports the error below:
Odd number of elements in hash assignment at
/usr/lib/perl5/vendor_perl/5.8.8/Net/Appliance/Session.pm line 245.
Does anyone have an idea how I can do that? And why am I getting this error message?
Thanks in advance.
From the Net::Appliance::Session page on MetaCPAN:
To handle more complicated interactions, for example commands which prompt for confirmation or optional parameters, you should use a Macro. These are set up in the phrasebook and issued via the $s->macro($name) method call. See the Phrasebook and Cookbook manual pages for further details.
So you set up a macro (scripted call and response) in a phrasebook, and then tell your session to use that macro.
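As a rough illustration only (the macro name, command, and match pattern below are invented, and the exact phrasebook syntax and constructor options should be checked against the Phrasebook and Cookbook pages), the entry in a custom phrasebook file could look something like:

macro run_with_confirm
    send some command
    match /\(yes\/no\)\? ?$/
    send no

and the script would then load that phrasebook and run the macro instead of cmd():

use Net::Appliance::Session;

# add_library points at the directory holding the custom phrasebook;
# personality, host and credentials below are hypothetical
my $s = Net::Appliance::Session->new({
    personality => 'ios',
    transport   => 'SSH',
    host        => 'device.example.com',
    add_library => '/path/to/my/phrasebook',
});

$s->connect({ username => 'user', password => 'secret' });
$s->macro('run_with_confirm');
$s->close;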