Rasa WebChat integration - chatbot

I have created a chatbot on Slack using Rasa Core and Rasa NLU by following this video: https://vimeo.com/254777331
It works pretty well on Slack.com, but what I need is to add it to our website using a code snippet. When I looked this up, I found that Rasa Webchat (https://github.com/mrbot-ai/rasa-webchat : a simple webchat widget to connect with a chatbot) can be used to add the chatbot to a website. So I pasted this code on my website inside the <body> tag:
<div id="webchat"/>
<script src="https://storage.googleapis.com/mrbot-cdn/webchat-0.4.1.js"></script>
<script>
  WebChat.default.init({
    selector: "#webchat",
    initPayload: "/get_started",
    interval: 1000, // 1000 ms between each message
    customData: {"userId": "123"}, // arbitrary custom data; stay minimal as this will be added to the socket
    socketUrl: "http://localhost:5500",
    socketPath: "/socket.io/",
    title: "Title",
    subtitle: "Subtitle",
    profileAvatar: "http://to.avat.ar",
  })
</script>
Run_app.py is the file that starts the chatbot (it's shown in the video: https://vimeo.com/254777331).
Here is the code of Run_app.py:
from rasa_core.channels import HttpInputChannel
from rasa_core.agent import Agent
from rasa_core.interpreter import RasaNLUInterpreter
from rasa_slack_connector import SlackInput

nlu_interpreter = RasaNLUInterpreter('./models/nlu/default/weathernlu')
agent = Agent.load('./models/dialogue', interpreter=nlu_interpreter)

input_channel = SlackInput('xoxp-381510545829-382263177798-381274424643-a3b461a2ffe4a595e35795e1f98492c9',  # app verification token
                           'xoxb-381510545829-381150752228-kNSPU0X7HpaS8oJaqd77TPQE',  # bot verification token
                           'B709JgyLSSyKoodEDwOiJzic',  # slack verification token
                           True)

agent.handle_channel(HttpInputChannel(5004, '/', input_channel))
I want to connect this Python chatbot to Rasa Webchat instead of using Slack, but I don't know how to do that. I tried looking everywhere, but I couldn't find anything helpful on the internet. Can someone help me? Thank you.

In order to connect Rasa Core with your web chat do the following:
Create a credentials file (credentials.yml) with the following content:
socketio:
  user_message_evt: user_uttered
  bot_message_evt: bot_uttered
Start Rasa Core with the following command (I assume you have already trained your model):
python -m rasa_core.run \
  --credentials <path to your credentials>.yml \
  -d <path to your trained core model> \
  -p 5500  # either change the port here to 5500 or to 5005 in the js script
Since you specified the socketio configuration in your credentials file, Rasa Core automatically starts the SocketIO Input Channel which the script on your website then connects to.
To add NLU you have two options:
Specify the trained NLU model with -u <path to model> in your Rasa Core run command
Run a separate NLU server and configure it using an endpoint configuration; this is explained in depth in the docs
The Rasa Core documentation might also help you.
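For the second option, the endpoint configuration is a small YAML file you pass to Rasa Core with --endpoints. A sketch of what it might look like (the URL is a placeholder for wherever your NLU server is running, and the exact keys may vary between Rasa Core versions):

```yaml
# endpoints.yml -- points Rasa Core at a separately running NLU server
nlu:
  url: "http://localhost:5000"
```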

In order to have a web channel, you need a front-end that can send and receive chat utterances. There is an open-source project by scalableminds; look at the demo first:
demo
To integrate your Rasa bot with this chatroom, install the Chatroom project as shown in the GitHub project below. It works with the latest 0.11 Rasa version as well.
Chatroom by scalableminds

You are facing a dependency issue; check which version of Rasa you are using and which version of the webchat widget.
The webchat widget doesn't support Rasa version 2+.


Botium CLI tests for rasa chatbot passing for any random convo (that should not pass)

I'm trying to set up Botium CLI with my Rasa chatbot for automated integration testing and dialog-flow tests. However, Botium passes tests that do not describe a conversation flow that would be possible with my chatbot.
I'm using it with botium-connector-rasa, and this is my botium.json config file:
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "change me later",
      "CONTAINERMODE": "rasa",
      "RASA_MODE": "DIALOG_AND_NLU",
      "RASA_ENDPOINT_URL": "http://localhost:8080/"
    },
    "Sources": {},
    "Envs": {}
  }
}
When I try to run botium-cli pointing --convos to my folder with the .convo.txt files, it passes the tests even when they should have failed.
.convo.txt file:
Test case 02: Robots' hell
# me
random question
# bot
random answer
Command used for running the tests:
botium-cli run --config botium.json --convos ./convos/
The output (screenshot not included here) shows all tests passing.
What is going on? Why is Botium passing my random tests when it should have failed them?
I've tried to talk to the bot using the emulator: if I run botium-cli emulator it works properly and I can communicate with my chatbot as expected.
The issue was in the .convo.txt files' syntax.
I just had to remove the spaces between the # and the me/bot. The provided example convo should look like this instead:
Test case 02: Robots' hell
#me
random question
#bot
random answer
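The space matters because Botium only treats lines that begin exactly with #me or #bot as sender directives; "# me" doesn't match, so the turns are never recognized and there is nothing to assert against. A toy illustration of this kind of prefix matching (hypothetical code, not Botium's actual parser):

```python
def find_senders(convo_text):
    """Toy sketch of sender-directive matching (not Botium's real parser):
    only lines starting exactly with '#me' or '#bot' count as turn markers."""
    senders = []
    for line in convo_text.splitlines():
        if line.startswith("#me"):
            senders.append("me")
        elif line.startswith("#bot"):
            senders.append("bot")
    return senders

broken = "Test case\n# me\nrandom question\n# bot\nrandom answer"
fixed = "Test case\n#me\nrandom question\n#bot\nrandom answer"
print(find_senders(broken))  # [] -- no turns recognized at all
print(find_senders(fixed))   # ['me', 'bot']
```

With no recognized turns, a test runner has no expectations to check, which is consistent with every convo "passing".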

GitLab health endpoint before integrating code

I’m new to deploying ML models and I want to deploy a model that contains several modules, each of which consists of “folders” containing some data files, .py scripts, and a Python notebook.
I created a project in GitLab and I’m trying to follow tutorials on FastAPI since this is what I’m going to be using. But I’ve been told that before I start integrating the code, I need to set up a health endpoint.
I know about the request curl "https://gitlab.example.com/-/health", but do I need to set up anything? Is there anything else I need to do for the project setup before writing the requirements.txt, building the skeleton of the application, etc.?
It depends totally on your needs; there is no health endpoint implemented natively in FastAPI.
"But I’ve been told that before I start integrating the code, I need to set up a health endpoint."
It's not necessarily a bad practice; you could start by listing all your future health checks and build your route from there.
Update from comment:
"But I don’t know how to implement this. Do I need a config file? I’m very new to this."
From what I understand you are very new to Python APIs, so you should start by following the official FastAPI user guide. You can also follow the FastAPI first steps from there.
A very basic one-file project that runs as is:
# main.py
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
async def root():
    return {"message": "Alive!"}
Remember that the above is not suitable for production, only for testing/learning purposes; to make a production API you should follow the official advanced user guide and implement something like the following.
A more advanced router:
There is the fastapi_health library for FastAPI, which is nice.
You can make basic checks like this:
# app.routers.health.py
from fastapi import APIRouter, status, Depends
from fastapi_health import health
from app.internal.health import healthy_condition, sick_condition

router = APIRouter(
    tags=["healthcheck"],
    responses={404: {"description": "not found"}},
)

@router.get('/health', status_code=status.HTTP_200_OK)
def perform_api_healthcheck(health_endpoint=Depends(health([healthy_condition, sick_condition]))):
    return health_endpoint
# app.internal.health.py
def healthy_condition():  # just for testing purposes
    return {"database": "online"}

def sick_condition():  # just for testing purposes
    return True
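Under the hood, health([...]) essentially runs each condition callable and combines the results into one response. A rough pure-Python sketch of the idea (an illustration only, not fastapi_health's actual implementation):

```python
def aggregate_health(conditions):
    """Run each condition callable; merge dict results into the response
    payload, and let boolean results decide the overall up/down status.
    (Illustration only -- not fastapi_health's real code.)"""
    payload = {}
    healthy = True
    for condition in conditions:
        result = condition()
        if isinstance(result, dict):
            payload.update(result)  # dicts contribute detail fields
        else:
            healthy = healthy and bool(result)  # booleans gate the status
    return healthy, payload

def database_condition():  # hypothetical check
    return {"database": "online"}

def disk_condition():  # hypothetical check that fails
    return False

status_ok, details = aggregate_health([database_condition, disk_condition])
print(status_ok, details)  # False {'database': 'online'}
```

This is why a condition returning a dict only adds detail to the payload, while a condition returning False flips the endpoint to unhealthy.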

The connection problem in the graphql-flutter mobile app and the server created by Graphene-Django

I have a simple GraphQL server created using the Graphene-Django package. I can test queries and mutations successfully in the desktop browser at http://127.0.0.1:8000/graphql.
To test queries and mutations in a mobile app, I made a simple Flutter app using the graphql_flutter package. My Flutter app works properly with Hasura and Heroku GraphQL endpoints, but it doesn't work with my Graphene-Django endpoint. When I try to run my mobile app, it gives an error message:
ClientException: Failed to connect to http://127.0.0.1:8000/graphql.
Please help me solve the problem. Thank you so much.
I solved the above-mentioned problem. I exempted my GraphQL endpoint from CSRF protection by wrapping the GraphQLView with the csrf_exempt decorator in the urls.py file of the Django project, just like this (see the source):
from django.conf.urls import url, include
from django.contrib import admin
from django.views.decorators.csrf import csrf_exempt
from graphene_django.views import GraphQLView
from cookbook.schema import schema

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^graphql$', csrf_exempt(GraphQLView.as_view(graphiql=True, schema=schema))),
]
It should be mentioned that I used 'http://my-IPv4-address:8000/graphql' in the Flutter app for a successful connection after the above modification to the CSRF protection settings.
To find your IPv4 address, follow this guide. After that, I added my IPv4 address to ALLOWED_HOSTS in the settings.py file like this:
ALLOWED_HOSTS = ['192.168.x.xxx', 'localhost', '127.0.0.1']
And finally, to run the graphene-django server I used this command in the cmd console:
(env) python ./manage.py runserver 0.0.0.0:8000

Not Found http://localhost:3000/openapi.json -loopback 3

Please help me, I'm stuck at this point.
Steps done successfully:
1) Installed LoopBack 3
2) Created the datasource and generated models from MySQL with the generator script
3) Updated model-config.json
4) LoopBack ran with no errors; the command prompt showed:
Web server listening at: http://localhost:3000
Browse your REST API at http://localhost:3000/explorer
5) When trying to access http://localhost:3000/openapi.json I get: Not Found
LoopBack 3 does not support OpenAPI v3, we support the older v2 specification only (it's usually referred to as Swagger).
The API spec in Swagger JSON format is available at the following URL:
http://0.0.0.0:3000/explorer/swagger.json
Please check out the recently announced LoopBack 4 version if you are interested in OpenAPI v3: the announcement and the new website.
I faced the same error. Clear your browser cache, then open the URL:
http://localhost:3000/explorer

How to integrate Ambari REST API for cluster monitoring examples

I have a use case to integrate and import the Ambari alerts generated in the Ambari web interface into the centralized monitoring environment we use for managing clusters. I am using HDP. Is there any detailed documentation on the steps to do this? Here are some examples of what I want to accomplish:
How to make a REST API call to see if the HDFS file system is more than 90% full, or how to check if one of the services (like HDFS or HBase) is down and has raised an alarm in the Ambari GUI.
Check out this page for links to the Ambari REST API for alerts:
https://cwiki.apache.org/confluence/display/AMBARI/Alerts
Also check out slides 4-20 in this SlideShare; slide 13 in particular highlights the Alerts REST API:
http://www.slideshare.net/hortonworks/apache-ambari-whats-new-in-200
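As a concrete starting point, the current alert instances for a cluster can be fetched from /api/v1/clusters/<cluster>/alerts and filtered by state. A sketch using only the Python standard library (the host, cluster name, and credentials below are placeholders for your environment):

```python
import base64
import json
from urllib.request import Request, urlopen

AMBARI_URL = "http://ambari-host:8080"  # placeholder: your Ambari server
CLUSTER = "MyCluster"                   # placeholder: your cluster name

def alerts_url(state="CRITICAL"):
    # Filtering on Alert/state narrows the list to e.g. CRITICAL alerts,
    # which covers cases like HDFS usage thresholds or a service being down.
    return f"{AMBARI_URL}/api/v1/clusters/{CLUSTER}/alerts?fields=*&Alert/state={state}"

def fetch_alerts(user="admin", password="admin", state="CRITICAL"):
    request = Request(alerts_url(state))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    request.add_header("X-Requested-By", "ambari")  # header Ambari's API expects
    with urlopen(request) as response:
        return json.load(response)
```

A centralized monitoring system could poll fetch_alerts() on a schedule and forward any returned alert items to its own alarm pipeline.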