User interaction with IFTTT

I'm a newbie when it comes to IFTTT...
I have accomplished the following: "When arriving at a certain location, turn on Smart Life device X".
Now I want to refine this to: "When arriving at a certain location, show the user a dialog asking whether to turn on a Smart Life device (yes/no)". If the user answers yes, then turn it on.
The IFTTT app can e.g. show a dialog like "you entered this region", but I can't find a way to let the user choose an option and act on that.
Does anyone have an idea how to accomplish this?
Big TIA

It seems this is not possible at this time with IFTTT alone...
I found a workaround with the Shortcuts on iOS:
I run the geofencing in Shortcuts app
show an alert
if the user answers yes, I trigger an IFTTT Webhook that performs the task, by opening the Webhook URL from Shortcuts.
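The Shortcuts side just has to open the Webhooks trigger URL. As a sketch, this is the URL format the IFTTT Webhooks (Maker) service expects; the event name and key below are placeholders you would replace with your own:

```javascript
// Build the IFTTT Webhooks (Maker) trigger URL.
// "turn_on_device" and the key are placeholders for your own applet's
// event name and the key from the Webhooks service page on ifttt.com.
function makerWebhookUrl(eventName, key) {
  // Format: https://maker.ifttt.com/trigger/{event}/with/key/{key}
  return `https://maker.ifttt.com/trigger/${encodeURIComponent(eventName)}/with/key/${key}`;
}

// In Shortcuts, a "Get Contents of URL" action on this address fires the
// applet; from Node.js, fetch(makerWebhookUrl(...)) would do the same.
```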

Related

Google Actions rejected in review due to "Mic is open" issue

I'm having trouble submitting my Google Actions skill (this skill is Dialogflow based).
This is the message after submitting my skill.
Your Action leaves the mic open for a user command without a prompt, such as a greeting or an implicit or explicit question.
Note: Thank you for submitting your Assistant Action for review.
However, your Action has been rejected for the following:
1. Your Action violates our User Experience policy. Specifically, your Action listens for a user command by leaving the mic open without a prompt such as a greeting or an implicit or explicit question.
After the Action responds to "メニュー画面", the mic remains open without a prompt back to the user.
For example,
User:メニュー画面
Action: *Mic is open* At this point, either prompt the user with further options or close the Action.
After reading the feedback, I understand that the phrase メニュー画面 causes the microphone issue.
I've tried entering the phrase メニュー画面 into the Google Assistant app on iOS, and my phone just opens the Settings menu without giving the response I configured in the Dialogflow intent.
I've tried entering the phrase in the Actions on Google web console in Phone mode, and nothing happens; the request is empty.
(Screenshot: empty response in the Actions console.)
My expectation after entering this phrase: Google Actions won't detect any intent matching the phrase, the Default Fallback Intent will be triggered, and Google will reply with one of my defined responses.
What can I do to resolve this issue? Thank you.
A cursory translation of "メニュー画面" seems to be "Menu screen".
When you're using Dialogflow, the "Default Fallback Intent" will catch any user utterance that does not match another intent. You should make sure this fallback tries to rescue the conversation by reprompting them or ending the conversation.
In practice, this means asking them a question or ending the conversation.
Once the Action sends this prompt, the surface will open the microphone and listen for the next user utterance. However, if the response is not a question, the user may not know they are expected to say something.
Reprompting the user can be a matter of good design, or as simple as adding something like "What do you want to do?" at the end of each response.
Alternatively, you can just mark the conversation as over. This will not reopen the microphone after its response.
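To make the two options concrete, here is a minimal sketch. The `withReprompt` helper and its wording are my own illustration, not part of the original answer; the `conv.ask`/`conv.close` calls follow the actions-on-google Node.js client:

```javascript
// Illustrative helper: make sure a response ends with a question so the
// user knows the mic is open and it is their turn to speak.
function withReprompt(text, reprompt = 'What would you like to do?') {
  // Leave responses that already end in a question untouched.
  return /\?\s*$/.test(text) ? text : `${text} ${reprompt}`;
}

// In an actions-on-google fulfillment, the two choices look roughly like:
//   conv.ask(withReprompt("Sorry, I didn't get that."));  // reopen the mic
//   conv.close('Goodbye!');                               // end the conversation
```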

Google Action leaves the mic open for a user command without a prompt

I have created a Google Assistant conversational Action that sends user questions via webhook to IBM Watson. When I sent it for review, Google denied it for the following reason:
Your Action leaves the mic open for a user command without a prompt, such as a greeting or an implicit or explicit question.
Example of a user conversation:
Google Action review team says the following:
I am happy to hear that 🙂 (mic opens) - At this point, either prompt the user with further options or close the Action.
Do I need to send explicit or implicit questions every time, or would it be OK if I just sent a response with suggestions (quick replies) that can guide the user without asking them for something?
For example:
Action: I am happy to hear that 🙂 (mic opens)
Suggestions: "Tell me about your company"
Example of response with suggestion:
Consider a Google Home, a speaker where you cannot show suggestions. When your Action responds with "I'm glad to hear that.", there is no further indication that it is listening or expecting a response. The person using it doesn't know to say anything and has no reason to believe the microphone is open.
In order to provide a better conversational experience, you should append a question to the end of each response. It can be something simple, like "What else do you want to talk about?" or you can make it more dynamic depending on the context.
Either way, you will need to provide auditory and visual clues that your Action wants to continue the conversation.

Dialogflow Google Home Assistant keeps listening, does not pause

I have created a chat bot which responds to requests. This is the flow currently happening:
I say "Talk to My Test App"
My app starts and says a welcome message.
I request something and my intent is fulfilled
After this the Google Home does not pause but keeps on listening.
If I stop it then again I will have to say "Talk to My Test App", which I also don't want.
I want Google Home to sleep after fulfillment
and wake up in the same app when I say "Ok Google".
More details:-
In my use case the user will talk to the app frequently, for example every 30 seconds to 2 minutes. I don't want him to say "Hey Google" every time to wake it up, then "Talk to My App", and then the command. I also don't want to say a long sentence after waking up the Google Home, like "Talk to My App to do this". So I thought it would be better if my app didn't stop by ending the conversation, but instead paused, so that the user can just wake up Google Home and directly give the command.
Currently Google Home does not pause after the first command; it keeps listening to surrounding sounds and responds to the noise, so the user has to stop it.
I needed a pause so I could narrate my thoughts without exiting my conversation during a client demo, so I added the following to Dialogflow's text responses, with a very long break at the end of each one. I can then interrupt the pause with "Okay Google" and stay within my conversation.
<speak>This is a sentence with a <break time="600s"/> pause</speak>
As the name suggests, a conversational VUI suggests that you're going to have a conversation with the agent. Not something with long pauses in between. The assumption is that if there is no reply, that the user isn't actively engaging in the conversation. There is no direct feature that does what you want, although there are a couple of interesting workarounds that might work for you.
First, as you suggest, deep linking with a phrase such as "Hey Google, ask My App to Do This" is certainly a possible approach, and one that you should support. In production, and as a user uses it more, the introduction and hand-off from Google gets shorter and shorter. Even the launch phrase can be shortened with the user creating a Shortcut - but this is a choice of the user, not you.
Although there is no way to "pause" a conversation, there is a way to have the reply include streaming audio that the user can interrupt: sending a Media Response begins playing that media.
When the URL pointed to by the media ends, your Action gets a callback (via an Event in Dialogflow or an Intent with actions.json) indicating the media has ended, and you can do something like play another Media Response, and continue to do this as long as appropriate.
At any time, your user can interrupt the audio by saying "Hey Google" and some command. This will trigger any matching Intent, as if they had said the command as usual.
There are some caveats with this scheme - some commands don't actually work (anything with "next" in it, for example, since that sounds more like a media command, which isn't implemented), and you need audio of reasonable length that won't be distracting in your environment - but this might be a reasonable solution for your scenario.
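As a hedged sketch of what such a reply looks like on the wire, here is the Dialogflow v2 webhook format for a Media Response; the audio URL and wording are placeholders, while the field names follow the Actions on Google rich-response JSON:

```javascript
// Build a Dialogflow v2 webhook payload containing a Media Response.
// The contentUrl must point at publicly reachable audio; this one is a
// placeholder. A simple response must precede the media item.
function mediaResponsePayload(audioUrl, title) {
  return {
    payload: {
      google: {
        expectUserResponse: true, // keep the conversation alive while audio plays
        richResponse: {
          items: [
            { simpleResponse: { textToSpeech: 'Listening for your next command.' } },
            {
              mediaResponse: {
                mediaType: 'AUDIO',
                mediaObjects: [{ name: title, contentUrl: audioUrl }],
              },
            },
          ],
        },
      },
    },
  };
}
```

When the audio ends, Dialogflow receives the media-end event the answer mentions, and your webhook can reply with another payload like this one to keep the loop going.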
If you want to exit the conversation you may do the following:
Go to the dialogflow console.
Create an intent (say, "goodbye").
In the Events section, enter actions_intent_CANCEL in the add-event field.
Enter your training phrases (e.g. exit, stop, pause).
Enter your farewell (say, "Goodbye") as a text response. You may skip this if you don't want a reply.
Enable 'Set this intent as end of conversation'.
Save
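If your goodbye intent is instead handled by a fulfillment webhook, the equivalent is a response that marks the conversation as finished. A sketch in the Dialogflow v2 webhook format (the wording is illustrative):

```javascript
// Dialogflow v2 webhook response that ends the conversation: with
// expectUserResponse set to false, the mic stays closed after the reply.
function goodbyeResponse(message = 'Goodbye!') {
  return {
    fulfillmentText: message,
    payload: {
      google: {
        expectUserResponse: false, // same effect as "end of conversation" in the console
        richResponse: {
          items: [{ simpleResponse: { textToSpeech: message } }],
        },
      },
    },
  };
}
```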

Open Mic issue for DialogFlow app

My submission of a Dialogflow app was denied due to an open-mic issue.
ERROR : "During our testing, we found that your app would sometimes leave the mic open for the user without any prompt. Make sure that your app always says something before leaving the mic open for the user, so that the user knows what they can say. This is particularly important when your app is first triggered.
Current implementation :
The user asks something and the app replies with static text and a static basic card. The Google Assistant bot reads out the text, and then the mic opens momentarily for user voice input.
NOTE THAT :
THERE IS NO FULFILLMENT REQUEST.
I DO NOT WANT TO END CONVERSATION HERE
ALL ARE STATIC RESPONSES
HOW DO I SOLVE THIS ?
The important part in the rejection is that you left the mic open without any prompt.
This usually means that your action has said something like "The answer is four" without giving any idea what the user should do now, or that it is the user's turn to speak.
A reply such as "The answer is four. What would you like to do now?" should meet the requirements. The point is to prompt the user so they are aware the conversation isn't over.

Is it possible to send rich responses to google home app?

I developed an Actions on Google app which sends a rich response. Everything works fine in the Actions on Google simulator. Now I want to test it on my Google Home Mini, but my rich responses are not spoken by the Mini. Is it possible to send my rich response to the Google Home app? The Home Mini would say something like "Ok, I found these hotels, look at the Home app", and the rich responses would appear there.
You can't send users to the Home app, but you can direct them to the Assistant available through their phone. The process is roughly:
At some point in the conversation (decide what is best for you, but when you have results that require display is usually good, or if the user says something like "Show me" or "Send this to my phone"), determine if they are on a device with a screen or not. You do this by using the app.getSurfaceCapabilities() method or by looking at the JSON in the originalRequest.data.surface.capabilities property. If they're using a screen, you're all set. But if not...
Make sure they have a screen they can use. You'll do this by checking out the results from app.getAvailableSurfaces() or looking at the JSON in the (not fully documented) originalRequest.data.availableSurfaces array. If they don't have a screen, you'll need to figure out your best course of action. But if they do have a screen surface (such as their phone, currently) available...
You can request to transfer them to the new surface using the app.askForNewSurface() method, passing a message explaining why you want to do the switch, a message that will appear as a notification on the device, and what surface you need (the screen).
If the user approves, they'll get the notification on their mobile device (using that device's normal notification system). When they select the notification, the Assistant will open up and will send your Action an Event called actions_intent_NEW_SURFACE. You'll need to create an Intent that handles this Event and forwards it to your webhook.
Your webhook should confirm that it is on a useful surface, and then proceed with the conversation and send the results.
You can see more about handling different surfaces at https://developers.google.com/actions/assistant/surface-capabilities
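The capability checks in the first two steps can also be done directly against the request JSON. A sketch follows: the capability string is the real Actions on Google constant, while the helper names are my own.

```javascript
const SCREEN = 'actions.capability.SCREEN_OUTPUT';

// Step 1: is the current surface a screen?
// `capabilities` is originalRequest.data.surface.capabilities,
// an array of { name } objects.
function hasScreen(capabilities) {
  return capabilities.some((c) => c.name === SCREEN);
}

// Step 2: does the user have any screen surface available?
// `availableSurfaces` is originalRequest.data.availableSurfaces.
function screenAvailable(availableSurfaces) {
  return availableSurfaces.some((s) => hasScreen(s.capabilities));
}

// Step 3 (client library): if a screen is available but not current,
// request the transfer with
//   app.askForNewSurface(context, notificationTitle, [SCREEN]);
```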
Rich responses can appear on screen-only or audio and screen experiences.
They can contain the following components:
One or two simple responses (chat bubbles)
An optional basic card
Optional suggestion chips
An optional link-out chip
An option interface (list or carousel)
So you need to make sure that the text response contains all the details for voice-only cases (e.g. Google Home/Mini/Max).
However, if your users are using the Assistant from a device with a screen, you can offer them a better experience with rich responses (e.g. suggestion chips, links, etc.).
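Putting those components together, here is a hedged sketch of such a rich response in the Dialogflow v2 webhook format; the hotel names and copy are invented for illustration:

```javascript
// A rich response combining a simple response (chat bubble), a basic
// card, and suggestion chips. Voice-only devices only hear the
// simpleResponse, so it carries the essential details by itself.
function hotelRichResponse() {
  return {
    payload: {
      google: {
        expectUserResponse: true,
        richResponse: {
          items: [
            {
              simpleResponse: {
                textToSpeech:
                  'I found two hotels: the Grand and the Plaza. Which one would you like to hear about?',
              },
            },
            {
              basicCard: {
                title: 'Hotels near you',
                formattedText: 'The Grand and the Plaza both have rooms tonight.',
              },
            },
          ],
          suggestions: [{ title: 'The Grand' }, { title: 'The Plaza' }],
        },
      },
    },
  };
}
```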