I want to know if there is any way I can add, modify, delete, or train intents in API.AI directly through code (e.g., through an HTTP call) so that I can automate the intent handling process.
Yes, API.AI has an SDK, and its agent REST API lets you manage intents over HTTP.
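As a minimal sketch of the HTTP route, assuming the v1 /intents endpoint and a developer access token (the endpoint, header, and payload fields below are from the v1 API as I remember it; verify against the current Dialogflow docs, since this has changed since):

```javascript
// Sketch: creating an intent via an HTTP call to API.AI's v1 REST API.
// The /intents endpoint and the payload fields (userSays, responses) are
// assumptions based on the v1 format. Assumes Node 18+ for global fetch.
const DEV_TOKEN = 'YOUR_DEVELOPER_ACCESS_TOKEN'; // from your agent's settings

async function createIntent() {
  const res = await fetch('https://api.api.ai/v1/intents', {
    method: 'POST', // PUT /intents/{id} modifies, DELETE /intents/{id} removes
    headers: {
      Authorization: `Bearer ${DEV_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      name: 'order-pizza', // hypothetical intent name
      userSays: [{ data: [{ text: 'I want a pizza' }] }],
      responses: [{ speech: 'What size would you like?' }],
    }),
  });
  console.log(await res.json());
}

createIntent().catch(console.error);
```

Listing, modifying, and deleting follow the same pattern against /intents and /intents/{id}, so you can round-trip your agent's intents for automation.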
We're used to any NLU service integration with Botkit being implemented as middleware; that's a fairly obvious approach.
Botkit Studio recently added out-of-the-box LUIS support, and that approach confuses me.
Depending on the resolved intent, I want to make an API call, passing extracted entities to the endpoint. Thus, the call chain looks like this:
Botkit App [calls Studio API] → Botkit Studio [sends message to the NLU service] → LUIS [resolves intent and entities] → Botkit Studio [finds convo object based on intent trigger and returns convo to the bot] → Botkit App [makes an API call from skill] → API [returns response to the bot] → Botkit App [sends response text to the chat client]
It makes me feel that I'm using it wrong. How do you use the new NLU feature for cases like this?
Thank you.
You can use LUIS directly as middleware INSTEAD of or IN ADDITION to using the cloud. This can be useful if you want to, say, only send content to the NLP provider when it does not already produce a hears() match. The built-in LUIS support is designed for people who do not want to, or are unable to, code this kind of logic, and it lets you just work with Studio's triggers and console to help train the NLP provider.
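Rolling your own receive middleware looks roughly like the sketch below. The LUIS v2 endpoint shape and the response fields (topScoringIntent, entities) are assumptions here; adjust the region, app ID, and key to your setup:

```javascript
// Sketch: calling LUIS yourself from Botkit receive middleware, instead of
// relying on Studio's built-in support. Assumes Node 18+ (global fetch) and
// the LUIS v2 REST response shape -- both assumptions to verify.
const Botkit = require('botkit');

const controller = Botkit.slackbot({ debug: false });

controller.middleware.receive.use(async (bot, message, next) => {
  if (!message.text) return next();
  const url =
    'https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/' +
    process.env.LUIS_APP_ID +
    '?subscription-key=' + process.env.LUIS_KEY +
    '&q=' + encodeURIComponent(message.text);
  const result = await (await fetch(url)).json();
  // Attach the resolved intent and entities so a skill can branch on them
  // and make its own API call, without the extra Studio round trip.
  message.intent = result.topScoringIntent;
  message.entities = result.entities;
  next();
});
```

If you only want to hit LUIS when nothing else matched, you can gate the fetch on your own pattern checks first.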
You might want to check this out if you have not seen it; it takes you through how responses are evaluated in your Studio application and where you can manipulate that processing:
https://botkit.ai/docs/readme-pipeline.html
I'm looking for a chatbot for testing purposes. Most of the chatbot-related pages I found are chatbot builders, but what I want is an existing chatbot service with an API that I can use for my project testing. Google Assistant and Alexa could be ideal, but is there anything for general chat? A child-like chat robot would be ideal.
I'm pretty sure there isn't for obvious reasons.
None of the general chatbots are perfect, but here are some you can use (or build on):
Julie on botlibre.com
Another you can try: https://www.personalityforge.com/
Mitsuku can be found at https://www.pandorabots.com/
To build a Google Assistant app, Google provides two different APIs as part of their node.js actions-on-google library:
ActionsSdkApp
DialogflowApp
They have a common interface, but I don't understand what the difference is between the two and why I would use one or the other.
In short, these two objects provide similar (although not identical) methods to handle requests and provide results for two default ways Google allows you to build an Action for the Assistant.
The DialogflowApp object is the one you will likely use for most purposes. It is meant to work with the Dialogflow tool, letting it handle the Natural Language Processing (NLP) components and passing the results, where appropriate, to your webhook. It provides a few methods that are specific to Dialogflow features, such as Contexts, and maps other things to the response format that Dialogflow expects.
The ActionsSdkApp is meant to be used if you are using your own NLP and your webhook is getting things directly from Google (without using Dialogflow). If you need to build an actions.json file, you're using the Actions SDK.
Both have common methods and idioms, such as app.ask() and app.tell(), and both map app.data to session storage and so forth, even if the implementation details differ for each type.
You should be using the one that matches the tool that you're using. For most new users, that will likely be Dialogflow and the DialogflowApp object.
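As a rough illustration with the version of the library the question refers to (see the update below), a Dialogflow-based fulfillment webhook looked something like this; the action names are hypothetical:

```javascript
// Sketch: a fulfillment webhook using the previous (v1) actions-on-google
// library. Dialogflow does the NLP; the webhook just maps the resolved
// action to a handler.
const { DialogflowApp } = require('actions-on-google');
const express = require('express');
const bodyParser = require('body-parser');

const server = express().use(bodyParser.json());

server.post('/webhook', (request, response) => {
  const app = new DialogflowApp({ request, response });

  const actionMap = new Map();
  // 'welcome.user' and 'order.pizza' are hypothetical Dialogflow actions.
  actionMap.set('welcome.user', (app) => app.ask('Hi! What can I do for you?'));
  actionMap.set('order.pizza', (app) => app.tell('Your pizza is on its way.'));

  app.handleRequest(actionMap);
});

server.listen(8080);
```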
Update
Note that the API in the question, the specific objects asked about, and the specific methods talked about in my answer are for the previous version of the library.
The concept of when to use the Actions SDK vs Dialogflow objects in the current library still holds, so the idea behind this question and answer is still valid, but the technical details have changed.
Update - Jun 2020
The library in question is now deprecated, since it no longer works with the current version of Actions on Google (Actions Builder/SDK AoG v3). It still works with Dialogflow (which uses AoG v2) and if you're still using the AoG v2 Actions SDK.
IN SIMPLE TERMS
Use the Actions SDK for one-shot apps. These are apps that provide the required answer directly after being invoked and then stop. They give you just this one result. A typical example would be setting a timer to ten minutes.
Use Dialogflow for all other apps: those that are really conversational, where there are multiple paths to follow and where you want your user to provide more information during the conversation.
Does anyone know the reasoning behind this language and framework choice? It seems like something like Python would be better suited for machine learning and NLP-type problems.
If you want, you can certainly write your Action fulfillment in Python. Although Google provides a convenience library in node.js, the fulfillment is done via a webhook that receives JSON and is responsible for sending JSON back to Google.
The client library does not provide NLP; it is a convenience wrapper around the Conversation API for Action fulfillment.
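To make that concrete, here is a minimal sketch that skips the client library and handles the raw JSON directly. The field names (result.action, speech, displayText) follow the Dialogflow v1 webhook format as an assumption, and the same handful of lines translate directly to Python or any other HTTP stack:

```javascript
// Sketch: fulfillment without the convenience library -- JSON in, JSON out.
// Field names assume the Dialogflow v1 webhook format; verify against the
// version you target.
const express = require('express');
const app = express().use(express.json());

app.post('/fulfillment', (req, res) => {
  const action = req.body.result && req.body.result.action;
  res.json({
    speech: `You triggered ${action}.`,      // spoken response
    displayText: `You triggered ${action}.`, // text shown on screen
  });
});

app.listen(8080);
```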
I am working on a project and I need to use these two APIs: JTAPI and GJTAPI. The problem is that both the GJTAPI and JTAPI projects seem dead. Is there a newer, similar Java API?
JTAPI is a specification that is implemented by vendors such as Cisco or Avaya. In my experience, there is no generic implementation of JTAPI, because each provider customizes its own implementation according to its telephony platform.
If you want a "generic" JTAPI, you should review this link, which refers to an Asterisk JTAPI implementation:
http://asterisk-jtapi.sourceforge.net/