Is it possible yet to initiate a phone call? E.g. if I'm making a branch finder action, a dialogue might go like this:
"Hi, where's my nearest store?"
"Your nearest store is our Oxford Street branch, at 300 Oxford St, Marylebone."
"Call it"
"Sure"
It then initiates a call to the store, like an Android app using an ACTION_DIAL intent.
I would think something like this should be possible, especially considering that the current devices supporting the Assistant are phones and Google Home, both of which can make calls. (I guess future devices with the Assistant built in might not, but then there could be a check like app.phoneCapabilities.) I've tried using .addSuggestionLink with a tel: address, with no luck.
I actually made a dodgy workaround for this, if anyone comes back to this and is interested.
You can suggest a webpage URL, pointing at a page that has a tel: link on it. Using either server-side code or just simple JavaScript (mine uses simple JavaScript), you can update the link.
My link is below - feel free to use it; I use it in my app. It's pretty basic, and the documentation is in the comments:
https://domdomegg.github.io/linkgenerator?href=tel%3A%2B442070313000&bgcolor=607d8b&buttontext=Click%20to%20call%20Google
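The page itself boils down to something like the sketch below (a reconstruction of the idea, not the actual source; the link element's id is an assumption). It reads the href, buttontext and bgcolor query parameters and applies them:
var params = new URLSearchParams(window.location.search);
var link = document.getElementById('link');  // assumed element id
link.href = params.get('href');  // e.g. "tel:+442070313000"
link.textContent = params.get('buttontext') || 'Call';
document.body.style.backgroundColor = '#' + (params.get('bgcolor') || 'ffffff');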
For starters, the Google Home cannot (yet) make calls. That feature was announced at I/O and will be rolling out later this year. It is not yet known if there will be API access to that feature when it does roll out. (There is certainly potential for abuse of the feature, although there are some ways to mitigate that abuse.)
I haven't tested, but I'm a little surprised that the tel: URL form didn't work, since I thought that would just launch an intent on Android (though I don't know how iOS would handle it), and tel: goes to the dialer intent.
You can show a call button which will open the dialer app with the specified number.
Here's a way to do that from fulfillment response:
"buttons": [
{
"title": "Call",
"openUrlAction": {
"url": "tel:+91123456789",
"androidApp": {
"packageName": "com.android.phone"
},
"versions": []
}
}
]
Add this JSON to your response and it will show a button that opens the default phone app with +91123456789 pre-filled in the dialer.
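If you are building the response with the v1 actions-on-google Node.js client library instead of hand-writing JSON, roughly the same button can be produced like this (a sketch assuming the library's app object; the card text is illustrative):
const card = app.buildBasicCard('Tap the button to call the store')
  .addButton('Call', 'tel:+91123456789');  // button title and target URL, as above
app.ask(app.buildRichResponse()
  .addSimpleResponse('Here is the number.')
  .addBasicCard(card));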
EXTRA
Similarly, if you want to send mail, you can add the following (pointing packageName at an actual mail app; Gmail's package is used here):
{
  "title": "Send Mail to Jay",
  "openUrlAction": {
    "url": "mailto:Jp9573@gmail.com",
    "androidApp": {
      "packageName": "com.google.android.gm"
    },
    "versions": []
  }
}
We have a chatbot for our website today that was not built using Google technology. The bot has a JSON REST API to which you can send the question and which replies with the corresponding answers, so all the intents and entities are resolved by the existing chatbot.
What is the best way to wrap this functionality in Google Assistant / for Google Home?
To me it seems I need to extract the "original" question from the JSON that is sent to our web service (when I enable fulfillment).
But since context is used to exchange "state", I have to find a way to exchange the context between Dialogflow and our own chatbot (see above).
But maybe there are other ways? Can our chatbot be invoked directly, without Dialogflow as a man in the middle?
This is one of those responses that may not be enough for someone who doesn't know what I am talking about and too much for someone who does. Here goes:
It sounds to me as if you need to build an Action with the Actions SDK rather than with Dialogflow. Then you implement a text "intent" in your Action - i.e. one that runs every time the user says something. In that text intent you ask the AoG platform for the text - see getRawInput(). Now you do two things. One, you take that raw input and pass it to your bot. Two, you return a promise to tell AoG that you are working on a reply but don't have it yet. Once the promise is fulfilled - i.e. when your bot replies - you respond with the text you got from your bot.
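Sketched as code (assuming the v1 actions-on-google library; callMyBot() is a hypothetical function returning a Promise that resolves to your bot's reply text):
function textIntent(app) {
  const question = app.getRawInput();  // the user's raw utterance
  // Returning the promise tells AoG the reply is still pending;
  // when the bot answers, speak its text back.
  return callMyBot(question).then(answer => app.ask(answer));
}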
I have a sample Action called the French Parrot here: https://github.com/unclewill/french_parrot. As far as speech goes, it simply speaks back whatever it hears, as a parrot would. It also goes to a translation service to translate the text and return the (loose) French equivalent.
Your mission, should you choose to accept it, is to take the sample, rip out the code that goes to the translation service and insert the code that goes to your bot. :-)
Two things I should mention. One, it is not "idiomatic" Node or JavaScript you'll find in my sample. What can I say - I think the rest of the world is confused. Really. Two, I have a minimal sample of about 50 lines that eschews the translation here: https://github.com/unclewill/parrot. Another option is to use that as a base and add the code to call your bot, plus the Promise-y code to wait on it.
If you go the latter route, remove the trigger phrases from the action package (action.json).
So you already have a backend that processes user input and sends responses back, and you want to use it to process a new input flow (coming from Google Assistant)?
That's actually my case: I have a service running as a Facebook Messenger chatbot and recently started developing a Google Home Action for it.
It's quite simple. You just need to:
Create an action here https://console.actions.google.com
Download GActions-Cli from here https://developers.google.com/actions/tools/gactions-cli
Create a JSON file action.[fr/en/de/it].json (choose a language). This file is your means of defining your intents and the URL of your webhook (a middleware between your backend and Google Assistant). It may look like this:
{
  "locale": "en",
  "actions": [
    {
      "name": "MAIN",
      "description": "Default Welcome Intent",
      "fulfillment": {
        "conversationName": "app name"
      },
      "intent": {
        "name": "actions.intent.MAIN",
        "trigger": {
          "queryPatterns": [
            "Talk to app name"
          ]
        }
      }
    }
  ],
  "conversations": {
    "app name": {
      "name": "app name",
      "url": "https://your_nodejs_middleware.com/"
    }
  }
}
Upload the JSON file using gactions update --action_package action.en.json --project PROJECT_ID
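To put the draft into test mode afterwards, the companion command is (same CLI, same package file):
gactions test --action_package action.en.json --project PROJECT_ID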
AFAIK, there is only a Node.js client library for Actions on Google (https://github.com/actions-on-google/actions-on-google-nodejs), which is why you need a Node.js middleware before hitting your backend.
Now user inputs will be sent to your Node.js middleware (app.js) hosted at https://your_nodejs_middleware.com/, which may look like this:
// Express server exposing the Actions SDK webhook.
const express = require('express');
const bodyParser = require('body-parser');
const { ActionsSdkApp } = require('actions-on-google');

const server = express();
server.use(bodyParser.json());

server.post('/', (req, res) => {
  const app = new ActionsSdkApp({
    request: req,
    response: res
  });

  // Runs on invocation ("Talk to app name" -> actions.intent.MAIN).
  function mainIntent(app) {
    let inputPrompt = app.buildInputPrompt(false,
      'Hey! Welcome to app name!');
    app.ask(inputPrompt);
  }

  // Runs on every subsequent user utterance (actions.intent.TEXT).
  function respond(app) {
    let userInput = app.getRawInput();
    // HERE you get what the user typed/said to Google Assistant.
    // NOW you can send the input to your BACKEND, process it,
    // get response_from_your_backend and send it back:
    app.ask(response_from_your_backend);
  }

  let actionMap = new Map();
  actionMap.set('actions.intent.MAIN', mainIntent);
  actionMap.set('actions.intent.TEXT', respond);
  app.handleRequest(actionMap);
});

server.listen(process.env.PORT || 8080);
Hope that helped!
Thanks for all the help! The main parts of the solution have already been given, but I'll summarize them here:
an action.json that passes everything on to the fulfillment service
a man in the middle (in my case an IBM Cloud Function) to map the JSON between the services
sharing context/state through the conversationToken property (sketched below)
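For that last point, a minimal sketch of the idea (assuming the raw Actions SDK v1 response format; this is not the demo's actual code): stash your backend's session id in conversationToken when replying, and read it back from conversation.conversationToken on the next request.
function buildReply(replyText, sessionId) {
  return {
    // The Assistant echoes this token back on the next request,
    // so the middleware can restore the backend session from it.
    conversationToken: JSON.stringify({ sessionId: sessionId }),
    expectUserResponse: true,
    expectedInputs: [{
      inputPrompt: { initialPrompts: [{ textToSpeech: replyText }] },
      possibleIntents: [{ intent: 'actions.intent.TEXT' }]
    }]
  };
}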
You can find the demo here: Hey Google talk to Watson
I am developing a Google Smart Home App, and I followed the official development documentation:
Create my project in Google Console
'gactions update --action_package action.json --project {myproject}'
Complete the necessary information, including App Information and Account Linking;
'gactions test --action_package action.json --project {myproject}'
I have tried many times, and even used another account and created it again, but the result was the same: my Smart Home App still does not appear in my Google Home.
Some people said their app would appear in the Home Control section as [test]{project_name} after 'gactions test', but that does not work for me. It is very frustrating, and this step has blocked my further work for many days.
Furthermore, I want to confirm the following questions:
When I created my Smart Home App, the section looked like this:
Actions added from Actions SDK
Actions: (this place is empty - is that normal, or is something missing, such as an agent?)
For a Smart Home App in test mode, is it restricted to voice-only conversations with the Google Home speaker, or should it appear directly in the Google Home app after 'gactions test'?
Any help will be appreciated!
My action.json:
{
  "actions": [{
    "name": "actions.devices",
    "deviceControl": {},
    "fulfillment": {
      "conversationName": "automation"
    }
  }],
  "conversations": {
    "automation": {
      "name": "automation",
      "url": "https://xxxxx"
    }
  }
}
Running it the first time can be a bit painful. Below is what I do to get it working.
Make sure you have filled in all details on the action console.
Click on the Test button to start your app.
From the simulator, type "Talk to [your app name]". It might ask you to log in.
If you can see the response from your app, then it should work on your Google Home.
Please note that your test app will expire if you leave it idle for too long. You need to click on the Test button to run it again, or your Google Home won't be able to launch your app.
Also, in your action.json, if you are pointing your intent to a webhook, please make sure your webhook is accessible from Google.
My boss gave me the task of creating a chatbot, not made with Telegram or Slack, that uses the Watson Conversation service.
Moreover, the chatbot has to be inserted inside a web page, so it has to be embeddable in HTML as JavaScript.
Does anyone know of other good platforms to perform these tasks?
Thanks for any help.
After replying in the comments I had another look and realised the Microsoft Bot Framework could work with minimal dev investment (in the beginning).
https://docs.botframework.com/en-us/support/embed-chat-control2/
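The embedded chat control described there amounts to dropping a single iframe into your page (the bot handle and secret below are placeholders you receive when registering the bot):
<iframe src="https://webchat.botframework.com/embed/YOUR_BOT_HANDLE?s=YOUR_SECRET" width="400" height="500"></iframe>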
This little guy is fun. You should give him a try.
http://www.program-o.com/
I strongly suggest you build more of an assistant than a simple bot, using a language-understanding tool like Microsoft LUIS, which is part of Microsoft Cognitive Services.
You can then combine this natural language processing tool with a bot SDK like the Microsoft Bot Framework mentioned above, so that you can easily run queries in natural language, parse the dialog response into entities and intents, and provide a response in natural language.
For example, a parsed dialog response will contain JSON something like this:
{
  "intent": "MusicIntent",
  "score": 0.0006564476,
  "actions": [
    {
      "triggered": false,
      "name": "MusicIntent",
      "parameters": [
        {
          "name": "ArtistName",
          "required": false,
          "value": [
            {
              "entity": "queen",
              "type": "ArtistName",
              "score": 0.9402311
            }
          ]
        }
      ]
    }
  ]
}
where you can see that this MusicIntent has an entity queen of type ArtistName that was recognized by the language-understanding system.
Using the Bot Framework, you can then pull that entity out like so:
var artistName = BotBuilder.EntityRecognizer.findEntity(args.entities, Entity.Type.ArtistName);
A good modern bot/assistant framework should support at least a multi-turn dialog mode, that is, a dialog with back-and-forth between the two parties, like:
>User: Which artist plays Stand By Me?
(intents=SongIntent, songEntity=`Stand By Me`)
>Assistant: The song `Stand by Me` was played by several artists. Do you mean the first recording?
>User: Yes, that one!
(intents=YesIntent)
>Assistant: The first recording was by `Ben E. King` in 1962. Do you want to play it?
>User: Which is the first album composed by Ben E. King?
(intents=MusicIntent, entity:ArtistName)
>Assistant: The first album by Ben E. King was "Double Decker" in 1960.
>User: Thank you!
(intents=Thankyou)
>Assistant: You are welcome!
Some bot frameworks then use a waterfall model to handle these kinds of dialog interactions:
self.dialog.on(Intent.Type.MusicIntent, [
  // Waterfall step 1
  function (session, args, next) {
    // prompt the user for something... (msg: your prompt text)
    BotBuilder.Prompts.text(session, msg);
  },
  // Waterfall step 2
  function (session, args, next) {
    // get the response
    var response = args.response;
    // do something with it...
    next(); // trigger the next step
  },
  // Waterfall step 3 (last)
  function (session, args) {
    // wrap up the dialog here
  }
]);
Other features to consider are:
support for multiple languages and automatic translation;
3rd-party service integrations (Slack, Messenger, Telegram, Skype, etc.);
rich media (images, audio, video playback, etc.);
security (cryptography);
cross-platform SDKs.
I've started to do some work in this space using this open source project called Talkify:
https://github.com/manthanhd/talkify
It is a bot framework intended to help orchestrate the flow of information between bot providers like Microsoft (Skype), Facebook (Messenger), etc. and your backend services.
I'd really like people's input to see how it can be improved.
I was wondering if somebody can help me solve this problem. I am trying to use FB.AppRequest() in the Facebook SDK for Unity to implement an Invite feature. This is the code I use:
if (FB.IsLoggedIn)
{
    FB.AppRequest(
        message: "Let's eat and be prosperous!",
        title: "Let's eat and be prosperous!",
        callback: InviteCallback
    );
}
// ...
void InviteCallback(FBResult response)
{
    // print the raw response to the console
    Debug.Log(response.Text);
}
The invitation dialog I get can be seen here (link to Imgur). There is no "Invite" label on those buttons and, unsurprisingly, clicking them does not send any invitation. However, I can see the FBResult data, which is in the following format:
{
  "request": "ABCD",
  "to": [
    "EFGH",
    "IJKL"
  ]
}
(more or less, since I haven't found a way to print new lines to Firebug console)
Additional information:
The result is the same regardless of the Sandbox setting.
The Unity version is 4.3.0f4
The Facebook SDK for Unity version is 4.3.4
The binary is hosted on an intranet server.
The Invite functionality in the Friend Smash example, hosted on the same server, doesn't work either. However, this is before the latest Friend Smash update (11/11/2013), whose Facebook functionalities I can't get to work yet.
Other Facebook functionalities (e.g. Init, Feed, API) work well.
I can't find any information about this on the internet. There are other questions about the Invite feature not working, but without the Facebook SDK for Unity, so I am not sure how they can be helpful to me.
Thanks a lot!
Just to clarify: the envelope button sends the invite, and it does so immediately when it is clicked.
If your app is in sandbox mode, people won't get notified when the request is sent. Look for the request in https://www.facebook.com/appcenter/requests as the recipient and see if the request shows up there.
If this still doesn't work, can you send me your FBResult data? unity-sdk@fb.com. Thanks!
This is the first time I'm trying to develop a Facebook app, so sorry in advance if my question is too naive.
What I need to do is make a chat-like Facebook application where:
the user can write something on the wall
the app should be able to detect this event and send an HTTP request to an external web service of my own which will provide a response (text)
publish that text as a comment
the user should be able to continue the dialog by entering more comments (in which case we go back to step #2)
Basically, this would be very similar to:
https://www.facebook.com/SkyscannerFlightSearch
I think one (ugly) way to do this would be to make a script that searches for new wall entries/comments and posts replies in an infinite loop using the Graph API, but that is obviously sub-optimal and expensive.
Is there any way to have Facebook call a certain URL every time a wall post/comment is entered?
Or maybe something like Twitter's streaming API, based on a long-polling technique?
Am I heading in the right direction with these kinds of solutions, or am I totally missing the point?
Thanks in advance.
Giampaolo
I am working on something very similar myself.
So far I have the "loop", which can be set to any page, group or app on Facebook.
SAMPLE: https://shawnsspace.com/plugins/wallfeed.php My page wall.
SAMPLE: https://shawnsspace.com/plugins/wallfeed.php?pageid=19292868552&ptype=feed&limit=40 Facebook Platform Wall.
With some permissions, a form and user access_tokens, I can make the wall postable. As for "ugly" - you do not need to run this in a loop; Facebook supports real-time updates and will send a response to your app when a user or page has made a change.
MORE: http://developers.facebook.com/docs/api/realtime
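Registering a subscription is a single Graph API call, roughly like this (the app id, tokens and callback URL are placeholders):
curl -F "object=user" -F "fields=feed" -F "callback_url=https://example.com/fb-callback" -F "verify_token=MY_VERIFY_TOKEN" "https://graph.facebook.com/APP_ID/subscriptions?access_token=APP_ACCESS_TOKEN"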
Thanks for the input.
I tried to use the real-time API using object=user and fields=feed.
If I understood the docs correctly, this should result in my callback URL being called (POST) every time a user writes something on my app's wall.
I received the initial GET request but never POST.
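For reference, that initial GET is the verification handshake: Facebook sends hub.mode, hub.verify_token and hub.challenge, and the endpoint must echo hub.challenge back before any POST updates are delivered. A minimal Express sketch of such a handler (the verify token string is whatever was registered):
var express = require('express');
var server = express();
server.get('/', function (req, res) {
  if (req.query['hub.mode'] === 'subscribe' &&
      req.query['hub.verify_token'] === 'MY_VERIFY_TOKEN') {
    res.send(req.query['hub.challenge']);  // completes the handshake
  } else {
    res.sendStatus(403);
  }
});
server.listen(8888);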
This is the current configuration:
{
  "data": [
    {
      "object": "user",
      "callback_url": "http://XXX.XXX.XXX.XX:8888/",
      "fields": [
        "feed"
      ],
      "active": true
    },
    {
      "object": "page",
      "callback_url": "http://XXX.XXX.XXX.XX:8888/",
      "fields": [
        "picture"
      ],
      "active": true
    }
  ]
}
I've noticed various user comments reporting different concerns about the reliability of this API.
Also, here:
http://developers.facebook.com/docs/reference/api/page/
it says: "Note: Real-time updates are not yet supported for the total number of Page checkins."
...and I'm not sure what that means exactly.
For the record, my app's page I'm using for tests is:
http://www.facebook.com/pages/testgiamp/187148861354102?sk=wall