Our company already has a conversational service for intent matching and entity parsing. Let's call this service "Charlie".
If we were to integrate Google Home with our service, would we have to repeat all of our existing queries in the action package, or is there a way to have a catch-all query, so that when we say "talk to Charlie", Google Home forwards future utterances to our Charlie service?
Take a look at the Actions SDK, which gives you access to the raw user query; you can then do your own NLU and generate the expected JSON response to keep the conversation going.
If you use the Node.js client library, you can use the assistant.getRawInput() method.
See the sample app for complete logic: Github - actionssdk-say-number-nodejs
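For illustration, here is a minimal sketch of such a catch-all fulfillment using the legacy actions-on-google Node.js client library; forwardToCharlie() is a hypothetical stand-in for a call to your own Charlie service:

```js
const express = require('express');
const bodyParser = require('body-parser');
const { ActionsSdkApp } = require('actions-on-google');

const server = express();
server.use(bodyParser.json());

server.post('/', (req, res) => {
  const app = new ActionsSdkApp({ request: req, response: res });

  function mainIntent(app) {
    // Triggered by "talk to Charlie"
    app.ask('Hi, this is Charlie. What can I do for you?');
  }

  function textIntent(app) {
    // Every follow-up utterance arrives here as raw text
    const query = app.getRawInput();
    // forwardToCharlie() is hypothetical: your own intent/entity service
    forwardToCharlie(query).then((answer) => app.ask(answer));
  }

  const actionMap = new Map();
  actionMap.set(app.StandardIntents.MAIN, mainIntent);
  actionMap.set(app.StandardIntents.TEXT, textIntent);
  app.handleRequest(actionMap);
});

server.listen(8080);
```

Every TEXT intent request carries the raw utterance, so nothing has to be duplicated in the action package beyond the invocation itself.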
We have integrated an IBM Watson Assistant skill/workspace with a Facebook page using the built-in Watson features. We did this using the integrated approach from the Virtual Assistants tab.
We are able to get responses in Facebook Messenger from the Watson skill/workspace FAQs. Now we want to add a few more questions to the skill/workspace and get the responses from a database.
We know that we can use IBM Cloud Functions to get DB data and respond with it, but Cloud Functions action types (web_action and cloud_function, or server) incur a cost, hence we are looking for another approach.
We have our own APIs developed for the DB and want to use those in Watson Assistant dialog node actions. Please let us know how we can add them in actions and get a response from the API without using a client application/Cloud Functions.
Note: we haven't developed any application for this chatbot; we directly integrated the Watson skill/workspace with the Facebook page and are trying to make API calls from the dialog nodes wherever we require them.
As you can see, IBM Watson Assistant allows you to invoke three different types of actions from a dialog node:
client,
server (cloud_function),
web_action.
For cloud_function and web_action, the action is hosted as a Cloud Function on IBM Cloud, so the computing resources are charged. For the type client, your app handles the API call, and the charges depend on where your app is hosted. Thus, there are always costs.
What you could do is write a slim wrapper function that is deployed as a web_action or cloud_function and simply calls your existing API. That way, very little computing resource is needed and the charges would be minimal. But again, independent of the action type, there are always costs (maybe not charges), one way or another...
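As a rough sketch (one possible shape for such a wrapper, not IBM's documented pattern), a Node.js Cloud Functions action that proxies your own DB API could look like this; the URL, the question parameter, and the response shape are all assumptions:

```js
const axios = require('axios');

// main() is the entry point IBM Cloud Functions invokes for a Node.js action
async function main(params) {
  // 'question' is a hypothetical parameter passed in from the dialog node
  const res = await axios.get('https://api.example.com/faq', {
    params: { question: params.question },
  });
  // The returned object becomes the action result the dialog can read
  return { answer: res.data.answer };
}

exports.main = main;
```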
I am trying to integrate with the Google Assistant/Google Home using Dialogflow, and we are hitting some issues.
The issue is the following.
I have a Google Actions project that I have created, which is linked to a corresponding Dialogflow project. I have enabled the V2 APIs on both the Dialogflow project and the integration between the Dialogflow project and the Google Actions project.
The Dialogflow project has intents that call a webhook for fulfillment. We have a service set up that responds to the webhook requests. This service returns responses in V2 format, including rich messages (like card, quick_replies, and carousel_select), under the key fulfillmentMessages.
However, it appears that when Dialogflow forwards this information to Actions on Google, it is not passing any of it along. On the other hand, if I include a key called fulfillmentText in my service response, then Dialogflow will forward that information to Actions on Google as
[{"simpleResponse":{"textToSpeech":"Nice to meet you, Bob!"}}]
It is not clear to me, based on reading the available docs, what I need to do so that Dialogflow will also propagate the contents of fulfillmentMessages to Actions on Google.
Thank you in advance.
The message objects for Actions on Google are different from the generic ones available for the other platforms Dialogflow supports. You also need to send the Actions on Google-compatible messages in order to have them visible to the Assistant.
If you're using the dialogflow-fulfillment library, you can import the actions-on-google objects and add them to the response, and the library will handle them.
Don't worry about mixing the rich media types - only those that are appropriate for each platform will show.
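A minimal sketch of what that looks like, modeled on the dialogflow-fulfillment samples; the intent name and card contents are illustrative:

```js
const { WebhookClient } = require('dialogflow-fulfillment');
const { BasicCard } = require('actions-on-google');

// Express-style handler; also deployable as a Cloud Function
exports.fulfillment = (req, res) => {
  const agent = new WebhookClient({ request: req, response: res });

  function welcome(agent) {
    // Generic message, rendered on Messenger and the other Dialogflow platforms
    agent.add('Nice to meet you, Bob!');
    // Actions on Google-specific rich response, rendered only on the Assistant
    agent.add(new BasicCard({
      title: 'Welcome',
      text: 'This card only shows up on the Google Assistant.',
    }));
  }

  const intentMap = new Map();
  intentMap.set('Default Welcome Intent', welcome); // hypothetical intent name
  agent.handleRequest(intentMap);
};
```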
I have built a chatbot with NLP and AI features in Java, and I have built RESTful web services for interaction with the chatbot's AI engine.
The REST API sends the user's query and in return gets an answer from the bot.
I want to integrate this chatbot with Skype: there should be a chatbot account, and whenever a person types a query, it should be sent to my server via a REST API call, and the response message should then be shown in the Skype chat window.
In my findings, I have only seen Skype integrations with bots built using the Microsoft Bot Framework. Can anyone suggest how I can integrate this custom bot?
If anybody feels I haven't added the right tags to reach the right audience, please add them.
I think you need to create an MS chatbot (in Azure), which allows you to deploy it to Skype (and other channels, e.g. Teams), and then define a webhook to invoke your service.
This service needs to "speak" with the MS Bot Framework, so you will need to serialise/deserialise the payload, but in the backend your existing endpoint can be used as you need.
Hope it helps.
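To make that concrete, here is a hedged sketch in Node.js (the Bot Framework also offers C# and Java SDKs) of a bot that relays each incoming Skype message to an existing REST endpoint; the /query URL and its request/response shape are assumptions standing in for your Java API:

```js
const restify = require('restify');
const axios = require('axios');
const { BotFrameworkAdapter, ActivityHandler } = require('botbuilder');

// Credentials come from the Azure bot registration
const adapter = new BotFrameworkAdapter({
  appId: process.env.MicrosoftAppId,
  appPassword: process.env.MicrosoftAppPassword,
});

class RelayBot extends ActivityHandler {
  constructor() {
    super();
    this.onMessage(async (context, next) => {
      // Forward the user's Skype message to the existing AI engine
      const reply = await axios.post('https://your-engine.example.com/query', {
        text: context.activity.text, // hypothetical request shape
      });
      await context.sendActivity(reply.data.answer);
      await next();
    });
  }
}

const bot = new RelayBot();
const server = restify.createServer();
server.post('/api/messages', (req, res) => {
  adapter.processActivity(req, res, async (context) => bot.run(context));
});
server.listen(process.env.PORT || 3978);
```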
I am trying to forward Google Smart Home events to my Dialogflow fulfillment service. I have created 3 intents with no input or output contexts set, no training phrases, and with the following events:
action_devices_SYNC
action_devices_EXECUTE
action_devices_QUERY
See also https://imgur.com/a/4eN9S.
Is that correct? I can't find confirmation in the docs, so that's why I am asking it here.
Reasoning
The reason I asked about connecting Google Smart Home with my Dialogflow endpoint is that I already have that endpoint in place. I hoped I could do something similar to https://stackoverflow.com/a/49119822/9038652, where I bound a Dialogflow intent to the actions_intent_OPTION event.
There isn't a reason to use Dialogflow to do smart home fulfillment, and it's actually not possible.
Dialogflow is great for taking unstructured user utterances and making sense of them. However, with smart home, Google handles all of the NLU and parsing. You, as the integration, will just receive a JSON request and will be expected to provide a JSON response.
So you will skip using Dialogflow and instead just build your webhook to parse the intents and give a valid response.
Dialogflow's service does not have a way to take in an intent name and expose a single endpoint URL that can be called by the Google Assistant. It also does not have integration with an OAuth server to do the account linking step.
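To give a sense of what the webhook replaces Dialogflow with, here is a bare-bones sketch that handles the three smart home intents directly; the device data is purely illustrative:

```js
const express = require('express');
const app = express();
app.use(express.json());

app.post('/smarthome', (req, res) => {
  const requestId = req.body.requestId;
  const intent = req.body.inputs[0].intent;

  switch (intent) {
    case 'action.devices.SYNC':
      // Report the user's devices and their capabilities
      return res.json({
        requestId,
        payload: {
          agentUserId: 'user-123', // hypothetical user id
          devices: [{
            id: 'light-1',
            type: 'action.devices.types.LIGHT',
            traits: ['action.devices.traits.OnOff'],
            name: { name: 'Demo light' },
            willReportState: false,
          }],
        },
      });
    case 'action.devices.QUERY':
      // Report the current state of the queried devices
      return res.json({
        requestId,
        payload: { devices: { 'light-1': { on: true, online: true } } },
      });
    case 'action.devices.EXECUTE':
      // A real implementation would apply each command to the target devices
      return res.json({
        requestId,
        payload: { commands: [{ ids: ['light-1'], status: 'SUCCESS' }] },
      });
    default:
      return res.status(400).send('Unknown intent');
  }
});

app.listen(8080);
```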
I wrote a smart home skill for Alexa, which interacts with a bunch of REST APIs I created. It integrates with my OAuth2 server; all good.
I've tried reading the limited Actions on Google documentation and looked at the example Node app on GitHub, and I'm stumped.
The action.json seems to take a single URL, and I'm unclear on what that should be. The example takes the easy route of passing a single URL and then deciding on sync/execute etc. via a URL param in index.js, which I don't want to do.
Can someone please explain how this works for them? I see a bunch of other people struggling on here, so I take some comfort that while I may be thick, I'm not alone!
Since you developed an Alexa smart home skill, you should be familiar with the skill adapter hosted as a Lambda function.
The example Node.js program works just like the skill adapter.
When Google Home invokes your smart home app, it sends the request to the URL in the action.json. You can use the example Node.js app for this URL, and then write your functions to handle SYNC/EXECUTE requests. This part should be very similar to the REST APIs you created for Alexa.
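For reference, the action.json from the smart home sample points everything at a single fulfillment URL, roughly like this (the URL is a placeholder; check the current docs for the exact schema):

```json
{
  "actions": [{
    "name": "actions.devices",
    "deviceControl": {},
    "fulfillment": {
      "conversationName": "automation"
    }
  }],
  "conversations": {
    "automation": {
      "name": "automation",
      "url": "https://example.com/smarthome"
    }
  }
}
```

All three intents arrive at that one URL; the request body's inputs[0].intent field tells you whether it is a SYNC, QUERY, or EXECUTE call.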