Why does my Rasa model name start with "nlu" rather than the date? - chatbot

I have trained my Rasa model, and the name of the model starts with "nlu" when it should start with the date.
I am expecting my model name to start with the number, not with nlu-number***
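A likely cause, offered as an assumption since the training command isn't shown: rasa train nlu trains only the NLU part and prefixes the archive with nlu-, while a full rasa train uses the bare timestamp:

rasa train nlu    # produces models/nlu-<timestamp>.tar.gz (NLU-only model)
rasa train        # produces models/<timestamp>.tar.gz (full model)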

Related

Building graph from OSM and GTFS data using OTP

I have a few questions regarding building a graph with OTP (OpenTripPlanner) containing GTFS and OSM data. I am using GTFS and OSM data for Berlin.
I read here that OTP version 2 does not support isochrone and surface analysis features. That was from November 2021. Has this changed since then?
Using the same GTFS and OSM data I am able to build the graph with OTP v2 but not with OTP v1.5. I get the error shown below while building the graph with OTP v1.5. It seems a field is missing in the pathways.txt file. However, before changing any of the GTFS files, I would like to know if anyone has had this error and whether adding the field will resolve it.
Exception in thread "main" org.onebusaway.csv_entities.exceptions.CsvEntityIOException: io error: entityType=org.onebusaway.gtfs.model.Pathway path=pathways.txt lineNumber=2
Caused by: org.onebusaway.csv_entities.exceptions.MissingRequiredFieldException: missing required field: pathway_type
Command used to build the graph:
java -Xmx4G -jar otp-1.5.0-shaded.jar --build graphs/current --port 9090
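If adding the field does turn out to be the fix, a minimal sketch of what that could look like (the file path and the default value 1 are assumptions; check each pathway against the GTFS spec first):

import pandas as pd

# Add the pathway_type column that the OneBusAway GTFS reader in OTP v1.5 expects.
pathways = pd.read_csv("gtfs/pathways.txt")
if "pathway_type" not in pathways.columns:
    pathways["pathway_type"] = 1  # assumed default; adjust per pathway
pathways.to_csv("gtfs/pathways.txt", index=False)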
I am also trying to start the OTP v1.5 server by loading an already built graph using the switch --load, but it throws the error Unknown option --load. How can I load an already built graph to start the OTP server in version 1.5?
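For what it's worth, OTP 1.x has no --load switch; the server is normally started with --server, pointing it at the directory that holds the built Graph.obj. A sketch, assuming the graph was built into graphs/current as above:

java -Xmx4G -jar otp-1.5.0-shaded.jar --graphs graphs --router current --server --port 9090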

Watson machine learning deployment takes too much time

I trained a model using the Watson Machine Learning service. The training process completed, so I ran this command to deploy it:
bx ml store training-runs model-XXXXXXX
I got the output with the model ID:
Starting to store the training-run 'model-XXXXXX' ...
OK
Model store successful. Model-ID is '93sdsdsf05-3ea4-4d9e-a751-5bcfbsdsd3391'.
Then I used the following to deploy it:
bx ml deploy 93sdsdsf05-3ea4-4d9e-a751-5bcfbsdsd3391 "my-test-model"
The problem is that I'm getting an endless message saying:
Checking if content upload is complete ...
Checking if content upload is complete ...
Checking if content upload is complete ...
Checking if content upload is complete ...
Checking if content upload is complete ...
When I check the COS result bucket, the model size is ~25 MB, so it shouldn't take this long to deploy. Am I missing something here?
Deploying the same model using the Python client API:
from watson_machine_learning_client import WatsonMachineLearningAPIClient

# wml_credentials is the dict of service credentials from the IBM Cloud console
client = WatsonMachineLearningAPIClient(wml_credentials)
deployment_details = client.deployments.create(model_id, "model_name")
This showed very quickly that there is an error with the deployment. The strange thing is that the error doesn't surface when deploying with the command-line interface (CLI).
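As a follow-up, a hedged sketch for pulling the error message out of the deployment details with the same client (the "resources"/"entity" shape of the response is an assumption based on the client's usual payloads):

# List all deployments and print their status to surface the failing one.
all_deployments = client.deployments.get_details()
for deployment in all_deployments.get("resources", []):
    entity = deployment.get("entity", {})
    print(deployment.get("metadata", {}).get("guid"),
          entity.get("status"), entity.get("status_details"))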

Rasa WebChat integration

I have created a chatbot on Slack using Rasa Core and Rasa NLU by watching this video: https://vimeo.com/254777331
It works pretty well on Slack.com. But what I need is to add this to our website using a code snippet. When I looked into that, I found that Rasa Webchat (https://github.com/mrbot-ai/rasa-webchat : a simple webchat widget to connect with a chatbot) can be used to add the chatbot to the website. So I pasted this code on my website inside the <body> tag.
<div id="webchat"/>
<script src="https://storage.googleapis.com/mrbot-cdn/webchat-0.4.1.js"></script>
<script>
  WebChat.default.init({
    selector: "#webchat",
    initPayload: "/get_started",
    interval: 1000, // 1000 ms between each message
    customData: {"userId": "123"}, // arbitrary custom data. Stay minimal as this will be added to the socket
    socketUrl: "http://localhost:5500",
    socketPath: "/socket.io/",
    title: "Title",
    subtitle: "Subtitle",
    profileAvatar: "http://to.avat.ar",
  })
</script>
"Run_app.py" is the file which starts the chatbot (it's shown in the video: https://vimeo.com/254777331).
Here is the code of Run_app.py:
from rasa_core.channels import HttpInputChannel
from rasa_core.agent import Agent
from rasa_core.interpreter import RasaNLUInterpreter
from rasa_slack_connector import SlackInput

nlu_interpreter = RasaNLUInterpreter('./models/nlu/default/weathernlu')
agent = Agent.load('./models/dialogue', interpreter=nlu_interpreter)

input_channel = SlackInput('xoxp-381510545829-382263177798-381274424643-a3b461a2ffe4a595e35795e1f98492c9',  # app verification token
                           'xoxb-381510545829-381150752228-kNSPU0X7HpaS8oJaqd77TPQE',  # bot verification token
                           'B709JgyLSSyKoodEDwOiJzic',  # slack verification token
                           True)

agent.handle_channel(HttpInputChannel(5004, '/', input_channel))
I want to connect this Python chatbot to Rasa Webchat instead of using Slack, but I don't know how to do that. I tried looking everywhere, but I couldn't find anything helpful on the internet. Can someone help me? Thank you.
In order to connect Rasa Core with your web chat, do the following:
Create a credentials file (credentials.yml) with the following content:
socketio:
  user_message_evt: user_uttered
  bot_message_evt: bot_uttered
Start Rasa Core with the following command (I assume you have already trained your model):
python -m rasa_core.run \
--credentials <path to your credentials>.yml \
-d <path to your trained core model> \
-p 5500 # either change the port here to 5500 or to 5005 in the js script
Since you specified the socketio configuration in your credentials file, Rasa Core automatically starts the SocketIO Input Channel which the script on your website then connects to.
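If you'd prefer to keep a Python entry point like Run_app.py, here is a rough equivalent sketch using the SocketIO channel (this assumes a rasa_core version that ships it, roughly 0.11+; check the constructor arguments against your version):

from rasa_core.agent import Agent
from rasa_core.interpreter import RasaNLUInterpreter
from rasa_core.channels.socketio import SocketIOInput

nlu_interpreter = RasaNLUInterpreter('./models/nlu/default/weathernlu')
agent = Agent.load('./models/dialogue', interpreter=nlu_interpreter)

# The event names match credentials.yml; the port matches the webchat snippet.
input_channel = SocketIOInput(user_message_evt="user_uttered",
                              bot_message_evt="bot_uttered")
agent.handle_channels([input_channel], http_port=5500)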
To add NLU you have two options:
Specify the trained NLU model with -u <path to model> in your Rasa Core run command
Run a separate NLU server and configure it using an endpoint configuration, as sketched below. This is explained here in depth
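A hedged sketch of that endpoint configuration (the file name endpoints.yml and port 5000 are assumptions; point the URL at wherever your NLU server runs):

# endpoints.yml
nlu:
  url: "http://localhost:5000"

Then pass it to the run command above with --endpoints endpoints.yml.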
The Rasa Core documentation might also help you.
In order to have a web channel, you need a front-end which can send and receive chat utterances. There is an open-source project by scalableminds; look at the demo first.
To integrate your Rasa bot with this chatroom, install the chatroom project as shown in the Github project below. It works with the latest Rasa 0.11 version as well.
Chatroom by Scalableminds
You are facing a dependency issue: check which version of Rasa you are using against which version of webchat.
Webchat doesn't support Rasa version 2+.

How to delete training definitions?

I am trying out the Watson Studio visual modeler for neural networks. While learning I have tried a few different designs and published several training definitions.
When I navigate to the Experiment Builder, I see a lot of definitions; some are old and no longer needed.
How can I delete old training definitions? (Ideally from the Watson Studio UI.)
The Watson Machine Learning python client doesn't support deleting training run definitions. WML's python client API shows what options are supported. The WML team is working to add such delete functionality though.
In the meantime, you can use WML's CLI tool to execute bx ml delete:
NAME:
   delete - Delete a model/deployment/training-run/training-definitions/experiments/experiment-runs
USAGE:
   bx ml delete models MODEL-ID
   bx ml delete deployments MODEL-ID DEPLOYMENT-ID
   bx ml delete training-runs TRAINING-RUN-ID
   bx ml delete training-definitions TRAINING-DEFINITION-ID
   bx ml delete experiments EXPERIMENT-ID
   bx ml delete experiment-runs EXPERIMENT-ID EXPERIMENT-RUN-ID
Use bx ml list to get details on the items that you wish to delete.
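For example (the subcommand argument is assumed to mirror the delete syntax above):

bx ml list training-definitions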
Actually, the python client does support deleting training definitions.
You just call client.repository.delete(artifact_uid). The same method can be used to delete any item from the repository (model, training_definition, experiment). It is documented in the python client docs, btw:
delete(artifact_uid)
Delete model, definition or experiment from repository.
Parameters: artifact_uid ({str_type}) – stored model, definition, or experiment UID
A way you might use me is:
>>> client.repository.delete(artifact_uid)
A training_run is a completely different thing than a training_definition.
You can also remove it if needed:
delete(run_uid)
Delete training run.
Parameters: run_uid ({str_type}) – ID of trained model
A way you might use me is:
>>> client.training.delete(run_uid)
You can also remove the experiment_run if needed by calling:
delete(experiment_run_uid)
Delete experiment run.
Parameters: experiment_run_uid ({str_type}) – experiment run UID
A way you might use me is:
>>> client.experiments.delete(experiment_run_uid)
Please refer to python client docs for more details: http://wml-api-pyclient-dev.mybluemix.net/
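Putting it together, a hedged sketch that lists the stored training definitions and deletes the stale ones (get_definition_details() is part of the same client; the "resources"/"metadata" response shape is an assumption):

# Review every stored training definition, then delete by UID.
details = client.repository.get_definition_details()
for definition in details.get("resources", []):
    uid = definition["metadata"]["guid"]
    print(uid, definition["entity"].get("name"))
    # client.repository.delete(uid)  # uncomment once reviewed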

Using Azure Data Factory to update Azure Machine Learning models

When I use Data Factory to update Azure ML models as the documentation describes (https://learn.microsoft.com/en-us/azure/data-factory/v1/data-factory-azure-ml-update-resource-activity),
I face one problem:
The blob reference: test/model.ilearner has an invalid or missing file extension. Supported file extensions for this output type are: ".csv, .tsv, .arff".'.
I searched for the problem and found this solution:
https://disqus.com/home/discussion/thewindowsazureproductsite/data_factory_create_predictive_pipelines_using_data_factory_and_machine_learning_microsoft_azure/ .
But my linked services for the outputs of the training pipeline and the update pipeline are already different.
How can I solve this problem?