Google Assistant taking over the commands given to my action - actions-on-google

My action has had triggering problems for the past few days. Queries that should be handled by my action are instead routed to the main Google Assistant flow. This happens on both Android phones and Google Home.
Steps to repro:
Speak: OK Google, talk to Tinker Doodle.
Assistant: Welcome to Tinker Doodle, what can I do for you?
Speak: Available commands.
Assistant: (Abruptly ends the Tinker Doodle conversation and lists general Assistant commands.)
I'd expect the Assistant to stay in the Tinker Doodle conversation and feed the input to my action.
This makes Tinker Doodle almost unusable. Can you help with this?
I configured the NO_MATCH system intent to call my webhook, since I use my own NLP.
This worked well on Android phones and Google Home until a few days ago. There is no problem when running in the Actions Builder simulator.
Here are screenshots of the main scene and the NO_MATCH intent from Actions Builder.

It isn't clear, but this sounds like it may be related to recent announcements that, in some cases, phrases that don't match a specific Intent may cause your Action to close so the Assistant can handle the phrase instead.
Even aside from this, handling input through NO_MATCH is generally undesirable, since it can only be matched three times in a row before the Action is forcibly closed.
Instead, you should create an Intent that can match "any" input and route that input to your handler. That involves:
Creating a new Type (I usually call it "Any") that accepts Free form text
Creating an Intent (which I have named "matchAny") that accepts values of this type through its training phrases (or even just one phrase that accepts a value of this type)
In your Scene, add this as an Intent that can be matched, and set it to call your webhook handler when it does (a sketch follows below).
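For reference, here is a minimal fulfillment sketch of that last step using the @assistant/conversation Node.js library. It assumes the Intent's parameter is named "any" and the handler is registered as "matchAny" (both names are just what I used above; adjust to your project), and myNlp() is a placeholder for your own NLP:

```typescript
// index.ts - minimal fulfillment sketch for the "matchAny" workaround.
// Assumes the Intent has one parameter of the free-form "Any" type, named "any".
import { conversation } from '@assistant/conversation';

const app = conversation();

app.handle('matchAny', (conv) => {
  // The free-form value captured by the Intent parameter...
  const fromParam = conv.intent.params?.any?.resolved as string | undefined;
  // ...falling back to the raw query the user actually spoke or typed.
  const utterance = fromParam ?? conv.intent.query ?? '';

  // Hand the text to your own NLP here; myNlp() is a stand-in.
  conv.add(myNlp(utterance));
});

// Placeholder for your own natural-language handling.
function myNlp(text: string): string {
  return `I heard: ${text}`;
}

export { app };
```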

Rather than using NO_MATCH, you can employ the design that the custom-nlu sample uses:
Have a 'Main' scene that tries to match a user_utterance intent.
The user_utterance intent then matches everything using the "any" data type.
When you go to the simulator, any query should explicitly match your intent and, as part of the sample, it will echo back what you said.
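The handler behind that scene is small. Something in the spirit of the sample (the parameter name user_utterance matches the scene above; the handler name is whatever you registered) looks roughly like this:

```typescript
import { conversation } from '@assistant/conversation';

const app = conversation();

// Echo back whatever the user_utterance parameter captured,
// mirroring what the custom-nlu sample does.
app.handle('handle_user_utterance', (conv) => {
  const utterance = conv.intent.params?.user_utterance?.resolved ?? '';
  conv.add(`You said: ${utterance}`);
});

export { app };
```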

Related

google assistant default fallbacks exit code

We are having a problem where saying things like "what is the weather" (something Google Assistant recognizes that is not in our intents) exits our action. We solved this with a fallback on the server/fulfillment side when we used Dialogflow, but now that we have switched over to Actions Builder, the problem is back.
How can we prevent it from closing our action?
This sounds like it may be related to recent announcements that, in some cases, phrases that don't match a specific Intent may cause your Action to close so the Assistant can handle the phrase instead. This will probably happen when the System NO_MATCH Intent gets matched, although Google has been vague on this point.
The workaround they appear to have suggested is to create an Intent that can handle "free form text" or "any" input and route that input to your handler. This means that one of your Intents will handle it, rather than falling back to NO_MATCH.
This involves:
Creating a new Type (I usually call it "Any") that accepts Free Form Text
Creating an Intent (which I have named "matchAny") that accepts values of this type through its training phrases (or even just one phrase that accepts a value of this type)
In your Scene, add this as an Intent that can be matched, and set it to call your webhook handler when it does (see the sketch below).
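With that in place, a phrase like "what is the weather" now reaches your webhook instead of closing the Action, and you decide how to respond. A rough sketch with the @assistant/conversation library, again assuming a parameter named "any" and a handler registered as "matchAny" (names are illustrative):

```typescript
import { conversation } from '@assistant/conversation';

const app = conversation();

app.handle('matchAny', (conv) => {
  const utterance = (conv.intent.params?.any?.resolved as string | undefined) ?? '';

  // Phrases the Assistant would otherwise have taken over now land here,
  // so the Action stays open and you choose the reply.
  if (/weather/i.test(utterance)) {
    conv.add("I can't check the weather, but here's what I can do: ...");
  } else {
    conv.add(`Sorry, I didn't understand "${utterance}". Try asking me to ...`);
  }
});

export { app };
```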

Intent not fired in device but works fine in simulator?

I am developing an action for the Assistant. I have an intent in Dialogflow that fires correctly in the simulator, but on a real device it does not fire; the fallback intent is triggered instead.
What puzzles me more is that the intent works fine with one email ID but does not work with another email ID on the same device.
I highly doubt the issue arises from language preferences.
It is really difficult to diagnose issues like this with only generic information. To really help, we'd need to see the specifics of the Intent and examples of phrases that don't trigger it where you would expect them to. But a few things to consider:
If the sample phrase uses homonyms like "to/two/too" or "four/for", it can be picked up incorrectly.
You don't indicate if this is spoken or typed where you have the problem. You may want to look at the entries at https://myactivity.google.com/ to see how it hears what you're saying.
Check the Dialogflow "History" section to see if it provides any guidance on what it is getting and why it is choosing the fallback intent.

how to add progress message in google home dialogflow

In my Dialogflow conversation, I would like to add some progress messages, like "hang in there with me, I'm looking up that data" or similar. Is there any guidance or best practice for doing this?
Unfortunately, there is no good way to do this at this time. If your webhook takes longer than about 5 seconds, Dialogflow will return one of the default responses it is configured with. If you're not using Dialogflow, the Actions SDK will report that your webhook isn't responding and will close the conversation.
There is currently no way to send a reply, and then send another reply without the user saying something first.
One workaround might be to have the default response be something like "I'm looking that information up. Ask me again in a few seconds." When your lookup finally completes, cache the information so when/if the user asks the question again, you can return it more quickly.
Depending on how long it takes, you may also wish to register a dynamic reprompt. This will send an event to your webhook if the user doesn't say anything. In a situation like this, they may say nothing for a few seconds, but that may be long enough for you to have computed the reply. So after a few seconds of silence you can suddenly announce "I've figured it out, the answer you were looking for is..." or something similar. This has some limitations - you can only reprompt twice like this before Google sends you a final reprompt and closes the conversation.
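A rough sketch of the cache-plus-reprompt idea with the actions-on-google Node.js library for Dialogflow. It assumes a Dialogflow intent (here called "Get Data No Input") attached to the actions_intent_NO_INPUT event, and slowLookup() is a stand-in for your long-running call:

```typescript
import { dialogflow } from 'actions-on-google';

const app = dialogflow();

// Simple in-memory cache of results keyed by conversation ID.
// In production you'd use a real store (Firestore, Redis, ...), since
// serverless platforms may cut short work started after the response is sent.
const results = new Map<string, string>();

app.intent('Get Data', (conv) => {
  // Kick off the slow lookup, but don't wait for it.
  slowLookup().then((answer) => results.set(conv.id, answer));
  conv.ask("I'm looking that information up. Ask me again in a few seconds.");
});

// Fires when the user says nothing (actions_intent_NO_INPUT event).
app.intent('Get Data No Input', (conv) => {
  const answer = results.get(conv.id);
  if (answer) {
    conv.ask(`I've figured it out. ${answer}`);
  } else {
    conv.ask('Still working on it, bear with me.');
  }
});

// Stand-in for whatever takes longer than the webhook deadline.
function slowLookup(): Promise<string> {
  return new Promise((resolve) =>
    setTimeout(() => resolve('The answer is 42.'), 8000));
}

export { app };
```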
Although the platform does support notifications, these are still in developer preview and don't work with all devices. They also don't quite continue the conversation (it doesn't just start talking) - they just send a notification to a phone that there is a message and that they can restart the conversation. Depending on your use case, this may be useful combined with the above.
Update
The Media Response includes a feature that we can take advantage of to handle this. Similar to the dynamic reprompt method above, you'll get a call automatically when the media you're playing ends. So you can play a short burst of "hold music", and your webhook will be called when it finishes. You can then either give the result or say you're still working on it and play more hold music.
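A sketch of the Media Response approach with the same library, again hedged: it assumes a Dialogflow intent wired to the actions_intent_MEDIA_STATUS event, and the URLs below are placeholders for your own hosted hold-music file and icon:

```typescript
import { dialogflow, MediaObject, Image, Suggestions } from 'actions-on-google';

const app = dialogflow();
const results = new Map<string, string>();

app.intent('Get Data', (conv) => {
  slowLookup().then((answer) => results.set(conv.id, answer));
  conv.ask('Let me look that up for you.');
  // Short "hold music"; when it finishes playing, the Assistant calls
  // back with the MEDIA_STATUS event.
  conv.ask(new MediaObject({
    name: 'Hold music',
    url: 'https://example.com/hold-music.mp3',  // placeholder URL
    description: 'Just a moment',
    icon: new Image({ url: 'https://example.com/icon.png', alt: 'hold music' }),
  }));
  // Media responses that aren't the last turn need suggestion chips.
  conv.ask(new Suggestions('Cancel'));
});

// Dialogflow intent attached to the actions_intent_MEDIA_STATUS event.
app.intent('Media Status', (conv) => {
  const answer = results.get(conv.id);
  if (answer) {
    conv.ask(`Thanks for waiting. ${answer}`);
  } else {
    // Still not done: apologize and queue another round of hold music.
    conv.ask('Still working on it, one moment more.');
    conv.ask(new MediaObject({
      name: 'Hold music',
      url: 'https://example.com/hold-music.mp3',
      description: 'Just a moment',
    }));
    conv.ask(new Suggestions('Cancel'));
  }
});

function slowLookup(): Promise<string> {
  return new Promise((resolve) =>
    setTimeout(() => resolve('The answer is 42.'), 8000));
}

export { app };
```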

Handle timeout of GNotifications in Gnome?

My program needs to react to the user not taking any action on a GNotification.
More specifically, a piece of data is written to the database only if the user does not press the "undo" button on the notification sent after the data's creation. My target deployment scenario does have notifications enabled and a real timeout value.
To be precise: Moving the notification "away" / deleting it should also count as such a timeout.
1) Is there a built-in way to 'listen' to notification timeouts?
2) If not, how could I still implement similar behavior?
I would use the D-Bus org.freedesktop.Notifications interface. Although it is still a draft specification, it does appear stable. My experience accessing the D-Bus interface using Vala has been that it is easier to use and gives the full feature set of the specification. GNotification doesn't seem to be as feature-complete.
From the draft specification you will see there is an expire_timeout argument to the org.freedesktop.Notifications.Notify method. That should fit your timeout requirement, although I've not used it personally. There is also an org.freedesktop.Notifications.NotificationClosed signal that will allow your program to be notified when a notification is closed, including when it times out or is dismissed by the user.
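To make that concrete, here is a sketch from Node.js using the dbus-next package (my choice of binding, purely for illustration; the Vala calls follow the same shape). It raises a notification with an "undo" action and a 5-second expire_timeout, then reacts to ActionInvoked and NotificationClosed; the argument order and the reason codes come from the specification:

```typescript
import * as dbus from 'dbus-next';

async function main() {
  const bus = dbus.sessionBus();
  const obj = await bus.getProxyObject(
    'org.freedesktop.Notifications', '/org/freedesktop/Notifications');
  // Cast to any: dbus-next adds the interface's methods dynamically.
  const notifier: any = obj.getInterface('org.freedesktop.Notifications');

  // Notify(app_name, replaces_id, app_icon, summary, body,
  //        actions, hints, expire_timeout) -> id
  const id: number = await notifier.Notify(
    'my-app', 0, '', 'Data saved', 'Press Undo to revert.',
    ['undo', 'Undo'],   // action key / label pairs
    {},                 // hints
    5000);              // expire_timeout in milliseconds

  let undone = false;

  // Assumes ActionInvoked arrives before the matching NotificationClosed,
  // which is how notification servers behave in practice.
  notifier.on('ActionInvoked', (closedId: number, actionKey: string) => {
    if (closedId === id && actionKey === 'undo') {
      undone = true;
      console.log('User pressed Undo: do not commit the data.');
    }
  });

  notifier.on('NotificationClosed', (closedId: number, reason: number) => {
    // reason: 1 = expired, 2 = dismissed by the user,
    //         3 = closed by CloseNotification, 4 = undefined
    if (closedId === id && !undone) {
      console.log('Notification timed out or was dismissed: commit the data.');
    }
  });
}

main();
```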
This post about the screen lock re-design for GNOME Shell 3.10 might give some indication of what notifications are capable of. The post includes some screenshots of notifications appearing in the lock screen.

OpenFeint achievements performance

I've decided to integrate OpenFeint into my new game to have achievements and leaderboards.
The game is dynamic and I would like the user to be rewarded immediately for successful results, but it seems to me that OpenFeint's achievements are a bit sluggish: the visual notification is shown only after confirmation is received from the server.
Is it possible to change something in the settings, or hack it a little, so the notification is shown immediately after checking only the local database to see whether the achievement has already been unlocked?
Not sure if this relates to the Android version of the SDK (which seems even slower), but we couldn't figure out how to make it faster. It was so unacceptably slow that we started developing our own framework that fixes most of OpenFeint's shortcomings and then some. Check out Swarm; it might fit your needs better.
There are several things you can do to more tightly control the timing of these notifications. I'll explain one approach, and you can use it as a starting point to explore further on your own. These suggestions apply specifically to iOS apps. One caveat: they refer to internal APIs in OFSDK 2.8 for iOS, which are not ordinarily recommended for high-level use and are subject to change in future versions.
The first thing I recommend is that you build the sample app with your own product key. Use the standard sample app to experiment before applying the result to your own code.
You are going to get the snappiest response by separating the notification pop-up UI from the process of submitting the achievement. This way you don't have to worry about getting wrapped up in the logic for deciding whether the submission is going just to the local db or is doing the full confirmation on an async network transaction.
See the declaration of "showAchievementNotice" in "OFNotification.h". Searching the sample app, you will see that this is the internal API used to display the achievement pop-up when an achievement is earned. It does not actually submit the achievement. You can call this method directly, as it is called from "OFAchievementService.mm", to control exactly when the message appears. You can then use the following article to disable the pop-up from being shown when the actual submission occurs:
http://support.openfeint.com/dev/notification-pop-ups-in-ios/
This gives you complete freedom to call the submission at a later time provided you keep track of the need to do so. For example, you could locally serialize a flag to take care of the actual submission either after the level is done or the next time the app starts up. Don't forget that the user could quit out of a game without cleanly finishing a level.
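The OpenFeint internals aside, the bookkeeping for deferred submission is simple. Here is a language-agnostic sketch (written in TypeScript only for illustration; showPopup and submitAchievement are stand-ins for your own pop-up call and the real submission call):

```typescript
// Pending achievement unlocks, persisted so a quit mid-level doesn't lose them.
import * as fs from 'fs';

const PENDING_FILE = 'pending-achievements.json';

function loadPending(): string[] {
  try { return JSON.parse(fs.readFileSync(PENDING_FILE, 'utf8')); }
  catch { return []; }
}

function savePending(ids: string[]): void {
  fs.writeFileSync(PENDING_FILE, JSON.stringify(ids));
}

// Called the moment the player earns the achievement: show your own
// pop-up immediately and just record that a submission is still owed.
function unlockLocally(achievementId: string, showPopup: (id: string) => void): void {
  showPopup(achievementId);
  savePending([...loadPending(), achievementId]);
}

// Called at a convenient time (end of level, next app start):
// actually submit everything that is still pending.
async function flushPending(submitAchievement: (id: string) => Promise<void>): Promise<void> {
  const remaining: string[] = [];
  for (const id of loadPending()) {
    try { await submitAchievement(id); }
    catch { remaining.push(id); }   // keep it for the next attempt
  }
  savePending(remaining);
}
```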