I'm developing a Google Action (that will run on a Nest Hub) through Dialogflow and a webhook, and I want it to behave like this:
the user invokes the action: "Hey Google, talk to ACTIONNAME"
through the Default Welcome Intent (hooked to my web service) the Action replies to the user and opens a website
// imports needed for this snippet
const { dialogflow, HtmlResponse } = require('actions-on-google');
const app = dialogflow();

app.intent('Default Welcome Intent', conv => {
  conv.ask("Hi! I'm opening your site"); // double quotes avoid the unescaped apostrophe
  conv.ask(new HtmlResponse({ // HtmlResponse requires Interactive Canvas to be enabled for the Action
    url: 'https://MY_IOT_SITE'
  }));
});
Now, the user could stay "silent" for minutes or hours, but I'd like to prevent Google Assistant from closing ACTIONNAME and returning to the clock; so far the action closes after a couple of minutes.
Is it somehow possible?
Thank you.
This is not possible. The platform intentionally places an upper bound on how long an action can run without any user input. This is done so that an action cannot occupy the device longer than expected and prevent future inputs from unintentionally getting routed to your action rather than the Google Assistant.
You can take a look at additional guidelines when developing your web app.
Since your question refers to an IoT-related website, you may want to take a look at the Smart Home reference, which provides an alternative way to let users control smart home devices with their voice or built-in graphical widgets.
I'm using appium-flutter-driver & webdriverIO to automate the flutter mobile app.
I have a use case in my application where clicking on
a "Mail us" button opens the Gmail app with a subject and body
a "Call us" button opens the Dialer app with a phone number
I want to assert/verify that the Gmail/Phone app is opened; either of the following is fine:
verifying the Gmail/Phone app package name
verifying the subject and content in the Gmail compose screen
I see here https://github.com/appium-userland/appium-flutter-driver that
await driver.switchContext('NATIVE_APP');
await (await driver.$('~fab')).click();
what does ~fab mean here?
How to find elements using ID, text, class in this case and perform click, enterText, etc operations?
In WebdriverIO's selector syntax, the ~ prefix is the accessibility id strategy, so ~fab locates the element whose accessibility id (in Flutter, its semantics label) is "fab". The available finders are mentioned here, with links to the documentation. The available commands are mentioned here, with links to the documentation.
An example of a finder by semantics label with a click:
element = FlutterElement(self.driver, FlutterFinder().by_semantics_label('Back'))
element.click()
An example of entering text:
driver.execute_script('flutter: enterText')
Got these examples from here.
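Since the question uses webdriverIO, here is a rough JavaScript equivalent, a minimal sketch assuming the appium-flutter-finder package and an already-connected webdriverio session named driver (the key and text values below are made up for illustration):

const find = require('appium-flutter-finder');

// click an element located by its Flutter ValueKey (hypothetical key 'fab')
await driver.elementClick(find.byValueKey('fab'));

// type into a field located by its widget text (hypothetical label 'Enter name')
await driver.elementSendKeys(find.byText('Enter name'), 'Hello');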
I'm using the Media Response to play audio in my Actions SDK driven Action (the Actions SDK is being used as the fulfillment tool in my Actions Console). At the end of each audio clip, I'm using the MEDIA_STATUS callback to advance to another mp3 file in a predefined playlist. As a result, users should be able to navigate forwards/backwards.
When testing on my Google Home Mini, Google Assistant on Android and a Smart Display, I can intercept "next" and advance to the next audio clip (it sends a request with an intent of type MEDIA_STATUS). However, I can't properly intercept "previous". Whenever I try, the audio restarts. Real devices seem to be handling this intent on their own and do not produce any console output (my webhook is not accessed at all).
Dialogflow seems to handle "next" and "previous" as follow-up intents, but I need to do the same without using Dialogflow as the fulfillment tool.
Can anyone please help with this particular problem?
Currently only a FINISHED event is supported for the Media Response, so you would not be able to distinguish between "next" and "previous".
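For reference, a minimal sketch of handling that FINISHED event, assuming the actions-on-google Node.js client with the Actions SDK (the playNextTrack helper is hypothetical):

const { actionssdk } = require('actions-on-google');
const app = actionssdk();

app.intent('actions.intent.MEDIA_STATUS', (conv) => {
  // FINISHED is currently the only media status delivered to the webhook
  const mediaStatus = conv.arguments.get('MEDIA_STATUS');
  if (mediaStatus && mediaStatus.status === 'FINISHED') {
    playNextTrack(conv); // hypothetical helper that responds with the next MediaObject
  }
});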
Referring to @Leon Nicholls' answer, I am really confused. I followed his Google Audioplayer tutorial/guide, which links to an audioplayer template action.
You can still download it, and if you unzip it, you will see code that is put there specifically to handle previous and next tracks.
Now it appears Leon is saying that it can't handle previous and next by voice (but the buttons appear to work).
Here is the sample code below. There are links to tutorial actions in it, but now all the content has been removed. I simply need blind people to be able to select and play various bits of audio, too long for SSML, which must be navigable (at least "next" and pause/resume later) and may be part of a playlist. How do I achieve this? Suggesting "Media Partner" or "Podcast" is not a solution, as neither is available in the UK.
// Handle the More/Yes/Next intents
app.intent(['More', 'Yes', 'Next'], (conv) => {
  console.log(`More: fallbackCount=${conv.data.fallbackCount}`);
  nextTrack(conv, false);
});

// Handle the Repeat/Previous intents
app.intent(['Repeat', 'Previous'], (conv) => {
  console.log(`Repeat: ${conv.user.storage.track}`);
  nextTrack(conv, false, true);
});

// Select the next track to play from the data
const nextTrack = (conv, intro, backwards) => {
  console.log(`nextTrack: ${conv.user.storage.track}`);
  let track = data[0];
  // Persist the selected track in user storage
  // https://developers.google.com/actions/assistant/save-data#save_data_across_conversations
  if (conv.user.storage.track) {
    conv.user.storage.track = parseInt(conv.user.storage.track, 10);
    if (backwards) {
      // …
I have a website which links to a chatbot built on IBM Watson Assistant. There are some hyperlinks on the website that I want to trigger specific nodes/intents in the Watson dialog.
Example: User clicks on "Provide feedback" link, the watson chatbot launches and based on the link the "provide_feedback" intent is recognised (thus preventing the user from needing to specify the intent after clicking the link).
Has anyone tried this before?
I also came across this requirement and want to mention another alternative here:
Instead of sending input text that matches the intent of your desired node, you can also pass the intent directly ("Intents to use when evaluating the user input", see the message API docs) and tell the assistant to match it with a confidence of 1.0.
I think this is a clean method, because you don't need to deal with disambiguation of your input text.
Then you don't need to send input text at all, and the intent does not even need example phrases :-)
For example, if you want to trigger a node that has the intent #provide_feedback,
you can call this Python example code:
send_message_to_chatbot(text="", intent="provide_feedback")
from ibm_watson import AssistantV2  # assuming the ibm-watson Python SDK (Assistant v2)
from ibm_watson.assistant_v2 import MessageInput, RuntimeIntent

def send_message_to_chatbot(text="", intent=""):
    # Pass the intent pre-matched with confidence 1.0 so the target dialog node
    # fires without any text-based intent detection.
    message = assistant.message(
        assistant_id=ASSISTANT_ID,
        session_id=SESSION_ID,
        input=MessageInput(
            text=text,
            intents=[RuntimeIntent(intent=intent, confidence=1.0)]
        )
    ).get_result()
    return message
Prerequisite is of course that the node is in the root branch of your dialog so it can be triggered.
The Watson Assistant service is basically used via a REST API. That API is invoked from the "Try it" pane in the workspace editor, from your dedicated application, or maybe from widgets embedded into a website. The message call is used to send user input to Watson Assistant and to receive a chatbot response.
What you can do is to call the message API from your app and pass a specific term as input message. That term would match an intent and hence trigger a specific dialog node. As an example, if you have an intent "provide_feedback" defined for the phrase "user pressed feedback button" and you pass in exactly that phrase as input message, then the intent "provide_feedback" will match.
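As a rough sketch of that approach, assuming the ibm-watson Node.js SDK (v5+) with placeholder credentials and IDs:

const AssistantV2 = require('ibm-watson/assistant/v2');
const { IamAuthenticator } = require('ibm-watson/auth');

const assistant = new AssistantV2({
  version: '2020-04-01',
  authenticator: new IamAuthenticator({ apikey: 'YOUR_APIKEY' }), // placeholder
  serviceUrl: 'YOUR_SERVICE_URL', // placeholder
});

// Send the exact phrase the intent was trained on (e.g. when the feedback
// button is clicked) so the "provide_feedback" intent matches.
assistant.message({
  assistantId: 'ASSISTANT_ID', // placeholder
  sessionId: 'SESSION_ID',     // placeholder
  input: { message_type: 'text', text: 'user pressed feedback button' },
}).then((res) => console.log(JSON.stringify(res.result, null, 2)));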
I want to start a form-driven conversation (FormBuilder/FormFlow) in the Microsoft Bot Framework.
if (user says hello)
{
    reply = "what do you want to hear, hi or hello?"

    if (user says order)
    {
        reply = start a FormBuilder.Form with the order form workflow
    }
    if (user says hello)
    {
        reply = "hello"
    }
}
My problem is that whatever I do first is the only thing that ever works.
For example: if my first chat line is "order", it starts the order form, but it never returns to normal conversation mode, even after the form ends.
If I start with "hi", it always stays in "hi" mode and never creates the order form when I type "order".
I need it to be dynamic.
You can use the code below to end your conversation when you are in a dialog or when a conversation flow ends.
context.Done<object>(new object());
or
context.Done(true);
Do let me know if you need any further help.
Per my understanding, you want to start specific dialogs when specific words are triggered, like "hello" for a greeting dialog and "order" for the form dialog.
I think there are two methods to achieve this in C#:
You can leverage intent recognition with LUIS, which can identify your users' intent from their spoken or textual input (utterances), and then trigger a specific dialog for each LUIS intent.
For this solution you can refer to the official document Recognize intents and entities with LUIS using a prebuilt domain for details, and refer to https://github.com/Microsoft/BotBuilder-Samples/tree/master/CSharp/intelligence-LUIS for a sample for your reference.
You can also build global message handlers using scorables in your bot application. With these, you can route users to certain functionality by using words like "help", "cancel", or "start over" in the middle of a conversation, when the bot is expecting a different response.
Please refer to https://github.com/Microsoft/BotBuilder-Samples/tree/master/CSharp/core-GlobalMessageHandlers for the sample for this solution.
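For comparison, the Node.js Bot Builder SDK (v3) expresses the same routing idea with triggerAction; a minimal sketch (the dialog name and trigger word are illustrative):

const builder = require('botbuilder');

const connector = new builder.ChatConnector({
  appId: process.env.MICROSOFT_APP_ID,
  appPassword: process.env.MICROSOFT_APP_PASSWORD,
});

// Default dialog: plain conversation
const bot = new builder.UniversalBot(connector, (session) => {
  session.send('Hi! Say "order" to start an order.');
});

// Global trigger: typing "order" interrupts whatever is active and starts this dialog
bot.dialog('order', [
  (session) => {
    session.send('Starting the order form...');
    session.endDialog(); // control returns to the default dialog afterwards
  },
]).triggerAction({ matches: /^order$/i });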
Hope it helps.
I'm trying to implement the new analytics for a Facebook game (using HTML/Javascript and Flash on Canvas, so there is no mobile version), but it seems that the documentation is incomplete. It says that there are 14 predefined events:
"Events are one of 14 predefined events such as 'added to cart' in a
commerce app or 'level achieved' in a game"
Source: https://developers.facebook.com/docs/reference/javascript/FB.AppEvents.LogEvent
"The fourteen pre-defined events are: App Launch, Complete
Registration, Content View, Search, Rating, Tutorial Completed, Add to
Cart, Add to Wishlist, Initiated Checkout, Add Payment Info, Purchase,
Level Achieved, Achievement Unlocked, Spent Credits."
Source: https://developers.facebook.com/docs/app-events/faq
However, on the reference page where all the events should be listed, the list is only 12 items long, and there is no "App launch" event:
https://developers.facebook.com/docs/reference/javascript/FB.AppEvents.LogEvent#events
Now, there are some sample event lists for some games, but they are very basic and they don't include the actual code: https://developers.facebook.com/docs/app-events/best-practices#casual
which recommends using these events:
App Install
App Launch
Completed Registration
Completed Tutorial
Level Achieved
Achievement Unlocked
(...)
Here is what I have so far:
FB.AppEvents.activateApp()
But is this event the equivalent of App Install or App Launch?
Also, should I send this before or after the user accepts to share their basic info? I have so many questions because it's not clear what activateApp() does...
Here is some code for sending some other events that could be useful:
FB.AppEvents.logEvent(FB.AppEvents.EventNames.COMPLETED_REGISTRATION);
FB.AppEvents.logEvent(FB.AppEvents.EventNames.COMPLETED_TUTORIAL);
var params = {};
params[FB.AppEvents.ParameterNames.LEVEL] = '12'; // player level
FB.AppEvents.logEvent(
  FB.AppEvents.EventNames.ACHIEVED_LEVEL,
  null, // numeric value for this event - in this case, none
  params
);
I still have more questions: how can I properly send the game version number (maybe with activateApp?) so I can create segments and cohorts later? Some example code would be really appreciated!
Thanks in advance!
FB.AppEvents.activateApp() provides install and launch event functionality, which is why those two events are not enumerated as options in https://developers.facebook.com/docs/reference/javascript/FB.AppEvents.LogEvent#events. activateApp() doesn't take a parameter, so you might want to look at using a custom event to satisfy your use case (for example, to record the game version).
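As a rough illustration of that suggestion, a custom event carrying the game version as a parameter (the event and parameter names below are made up, not part of the predefined set):

// Hypothetical custom event logged once per session start
var params = {};
params['game_version'] = '1.4.0'; // hypothetical parameter carrying your version string
FB.AppEvents.logEvent(
  'game_session_start', // hypothetical custom event name
  null,                 // no numeric value to sum for this event
  params
);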