How to keep Google Assistant Behavior but also trigger IFTTT - google-now

I know you can make custom Google Assistant triggers that will invoke IFTTT. But I want to make a custom trigger that will do something but /also/ keep the default Google Assistant behavior. Is there a way to do this?
Description of my actual goal: I speak German as much as possible at home with my daughter. But there are times when I don't know a word, so I can say "OK Google, what is $word in German?" and it will speak it to me. This is very useful.
Then I manually add that word to my vocabulary list to study it.
I would like to write my own Python/Node microservice that will receive the word and generate flashcards (do a lookup on Linguee for sample sentences, for example) in my study program automatically.
But I would also like to keep the Google Assistant behavior that reads the translation back to me on my phone.
So is there a way to accomplish this? Basically instead of having a trigger invoke Google Assistant, I'd like it to do that and also do a second behavior (issue a POST request to a custom URL).
Thank you.
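For the receiving end, here is a minimal sketch of such a microservice in Python, using only the standard library. It assumes IFTTT's Webhooks service POSTs the captured word as the "value1" ingredient (value1/value2/value3 are IFTTT's convention); the Linguee lookup and the flashcard step are placeholders:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def extract_word(payload: dict) -> str:
    """Pull the captured word out of an IFTTT Webhooks payload.

    IFTTT's Webhooks service sends JSON like {"value1": "...", ...};
    the value1/value2/value3 ingredient names are IFTTT's convention.
    """
    return payload.get("value1", "").strip().lower()

class FlashcardHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        word = extract_word(payload)
        # Placeholder: look up sample sentences (e.g. on Linguee)
        # and append a flashcard to the study deck here.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(json.dumps({"queued": word}).encode("utf-8"))

# To actually serve requests:
#   HTTPServer(("", 8080), FlashcardHandler).serve_forever()
```

Note that an IFTTT applet built this way replaces, rather than supplements, the Assistant's default spoken answer, which is exactly the limitation the question is about.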

Related

How can I help the Google Assistant recognise my action's name?

I have created an action called Sequematic. The name is a combination of the words sequence and automatic.
Unfortunately when asking Google to 'talk to Sequematic' it has a hard time recognising the name.
Is there anything I can do to help with recognition of the name?
At this moment there is very little you can do to help Google Assistant with action-name recognition. Recognition is done by the voice recognition on the Google Assistant device itself, and this is managed and trained by Google.
The only trick I know of is playing around with the name pronunciation in your Actions on Google settings. This only works if recognition is changing the name to a close match, something like "Sekuematik", because it does not know how to handle the word. You could then pick "Sekuematik" as your action name and set its pronunciation to "Sequematic".
While this works, it still changes your action's name, so it will be displayed as "Sekuematik" in any visual element of Google Assistant. This trick might work for some people, but it is far from ideal.
The only other options are to wait for Google to update its voice recognition, or to contact Google Assistant support and ask whether they can do something for your action name.

Google Assistant Explicit Intents without App name

I would like to make my Google Assistant (Google Home & Android smartphone) a little bit smarter by adding simple small-talk intents and, last but not least, useful "OK Google, do whatever" or "OK Google, tell me when ..." intents.
For now I only own an Echo Dot with Alexa, and I really hate its concept of skills due to their strict invocations. I have read somewhere that Google is going to get around this nightmare by using implicit invocation. However, what I have done so far is not even close to good.
With implicit invocation, Google Assistant can find the correct action by searching for intents. This is good and I can add a simple phrase that Google detects correctly. However, instead of invoking that intent, Google asks me if it should ask appname to do so.
Of course this is not really an option if we want to make digital assistants smarter, since it not only destroys any kind of smartness but also prevents us (or at least me) from writing useful actions at all, because they would be annoying to develop and to use. Assistants should be able to react to specific phrases and intents instead of requiring the user to specify the app. This restriction makes it impossible to create simple intents like "Say goodnight" or "Ask my girlfriend when she will be here".
My question is not only whether this is currently possible, but also what we can expect regarding this problem in the future. Is there any good news? Or do we have to wait until the existing assistants evolve their real power?
You can add custom trigger phrases that will open or deep-link into your skill, using query patterns in action.json.
action.json Query Patterns (Google Docs)
But the number of patterns is limited, and I am not sure whether you can completely avoid Google asking things like "should I really open it?" or announcing "I am opening it now".
And you may still have to say "OK Google" to make it start listening at all.
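For reference, a deep-link intent with query patterns in the legacy Actions SDK's action.json looked roughly like this (the intent name, conversation name, and fulfillment URL below are made-up placeholders):

```json
{
  "actions": [
    {
      "description": "Deep link for saying goodnight",
      "name": "GOODNIGHT",
      "fulfillment": { "conversationName": "goodnight_bot" },
      "intent": {
        "name": "com.example.intents.Goodnight",
        "trigger": {
          "queryPatterns": [
            "say goodnight",
            "wish me a good night"
          ]
        }
      }
    }
  ],
  "conversations": {
    "goodnight_bot": {
      "name": "goodnight_bot",
      "url": "https://example.com/fulfillment"
    }
  }
}
```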
Nick Felker's answer is better than mine. To expand on it a bit:
In the Google Home app on your phone tap the hamburger menu icon (three horizontal parallel lines) in the upper left, then go to "More settings", then "Shortcuts" (near the bottom), then press the little blue "+" button in the lower right to set up your custom shortcut.
Another option for extremely simple intents, "Say goodnight" for example, is to use IFTTT, which has lots of integrations out of the box, as well as the ability to pass the message along to a webhook you could write yourself. Important caveat: IFTTT isn't "smart" itself, so that first layer of integration only does simple string matching (and I mean simple; it appears to be case-sensitive).

Make Google Home Action work with "Hey Google, INTENT" instead of "Hey Google, ask ACTION to INTENT" possible?

Right now my Action for Google via Dialogflow only works if I say:
Hey Google, ask ACTION to INTENT
I want to remove the ask ACTION to part, so I can just say:
Hey Google, INTENT
My Action is basically a "Turn on device". I can say things like:
Hey Google, ask home to turn on TV
Hey Google, ask home to turn on fan
and so on. Is this possible? I know for Alexa they're called Home Automation Skills, but they're apparently really tricky to set up.
There are two (sorta three) answers that address your question in different ways.
First - there is no way, programmatically, to remove the ask ACTION to part. This would be like asking if there was a way to remove the hostname from a URL.
However, you (as a user) can set up a shortcut so that when you say "Hey Google, turn on the TV", it actually gets interpreted as "Hey Google, ask some action name to turn on the TV". To do this:
Go into your Google Home app.
Open the Menu -> More Settings -> Shortcuts
Second - as #shortQuestion suggested, you could rely on implicit invocation to do what you want. To pull this off, you set up the various phrases that can trigger an implicit invocation - and hope that Google notices these and suggests them as something the user can do. There is no way, however, to force Google to pick your Action for a particular phrase; Google's pick may change over time, and it may merely suggest your action instead of immediately invoking it. This is sort of like playing the SEO game with Google's search engine.
But... what you're asking for is really more along the lines of a Smart Home action. I wouldn't call it "tricky" to create a Smart Home action, but you cannot do it with Dialogflow, and it requires you to create and set up a server that manages (and ultimately controls) the devices in question.
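To give a flavor of what such a server handles, here is a minimal sketch of a Smart Home fulfillment handler for the SYNC and EXECUTE intents (the real API also has QUERY and DISCONNECT; the device id, name, and agentUserId below are made-up placeholders):

```python
def handle_smart_home(request: dict) -> dict:
    """Dispatch a Google Smart Home fulfillment request.

    Google POSTs JSON with a requestId and an inputs list whose
    intent field is one of the action.devices.* intents.
    """
    intent = request["inputs"][0]["intent"]
    request_id = request["requestId"]

    if intent == "action.devices.SYNC":
        # Report which devices this user has and what they can do.
        return {
            "requestId": request_id,
            "payload": {
                "agentUserId": "user-123",  # placeholder user id
                "devices": [{
                    "id": "tv-1",  # placeholder device
                    "type": "action.devices.types.TV",
                    "traits": ["action.devices.traits.OnOff"],
                    "name": {"name": "TV"},
                    "willReportState": False,
                }],
            },
        }

    if intent == "action.devices.EXECUTE":
        # This is where you would actually switch the hardware on or
        # off; the sketch just acknowledges the command.
        commands = request["inputs"][0]["payload"]["commands"]
        ids = [d["id"] for c in commands for d in c["devices"]]
        return {
            "requestId": request_id,
            "payload": {"commands": [{"ids": ids, "status": "SUCCESS"}]},
        }

    return {"requestId": request_id, "payload": {"errorCode": "notSupported"}}
```

Saying "Hey Google, turn on the TV" then routes straight to EXECUTE with an OnOff command, no invocation name needed, which is exactly the behavior the question asks for.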
I just found in the Invocation and Discovery docs that you are able to do that! There are three ways:
Invocation name, e.g. "Talk to Dr. A"
Deep link invocation, e.g. "find recipes"
Discovery (most important): "In some cases, some of an intent's query patterns can trigger your action, even if users don't use your invocation name."
This is not programmatically possible on the Google Assistant.
The only way to do this is by setting a shortcut in your Assistant. You could set "INTENT" as a shortcut for "ask ACTION to INTENT".
Go to the "Action discovery and updates" section of the Actions on Google console and configure some implicit invocations, so the public can discover the functionality of your assistant without explicitly invoking your bot by name.

Why would an invocation name for an AoG app be ignored?

I have an Actions on Google app in testing. Most of the time when I say "OK Google, talk to 'my app name here'", my app runs. Sometimes it does not, and Google passes the question to Google Search. Then, on my phone, I get search results in the Google app; in the simulator I see a message like "blah blah blah not supported in simulation".
I have had the question up since last week on the official Google+ "support" page, with only a single reply asking whether the screenshots were real, from a person who I think is just another developer.
(Screenshots: a successful invocation, and an unsuccessful invocation handled by search. The screenshots were captured, not drawn, by the way.)
Does anyone here have an idea why search is run, and what, if anything, I can do about it?
This is a hobby project of mine, to be sure, but if I were trying to speech-enable a device, this seems like it might be a showstopper and a reason to go with another vendor. No?
Just from those screenshots, my first thought is: how is "visor" pronounced? And how could it sound like you're mispronouncing it? If the recognizer doesn't match the "visor" part to the pronunciation you think it should have, even if the displayed word is the same, it might pass the query along to search.
Remember - this is English. What is written out isn't necessarily what it sounds like, and the system is trying to match what you say, not how it is written.
One thing you can do is listen to the recordings Google keeps of your invocation attempts, and try to figure out whether the successful ones sound different from the ones that failed.

Name input like SMS or Facebook app

I'm trying to find a library that can handle autocompletion with tokened (grouped) texts.
There are some very nice libraries out there for autocompletion such as:
https://github.com/EddyBorja/MLPAutoCompleteTextField
https://github.com/hoteltonight/HTAutocompleteTextField
https://github.com/TarasRoshko/TRAutocompleteView
The problem here is that I want the selection to look like name tagging in the SMS or Facebook apps, so when the user tries to delete a name, the whole token is deleted at once.
There are good jQuery implementations, one of them is this:
http://loopj.com/jquery-tokeninput/
I couldn't find any for iOS, perhaps the keywords are very generic, thus Google does not show any related results. Is there any library for this or can you provide any code examples?
What you're trying to achieve can't be done using the public SDK.
However, there are some nice third-party solutions.
I found this question, Is there an iPhone equivalent to the NSTokenField control?, which includes links to controls you could use.