Google Assistant taking over my app's conversation on certain commands - actions-on-google

My app has certain commands that seem to conflict with the Google Assistant's built-in behavior. Even though I'm in a conversation with my app and have explicitly asked for a response, the Google Assistant takes over when the user says "Read the note" or "play the tape". In the first case it switches to reading my Google Keep notes; in the second it launches YouTube Music and plays something. I want those commands to be fulfilled by my app!
I've tried training on those specific phrases through the Dialogflow console, but it didn't seem to help. Is there any way to ensure my app processes all commands while it's in a conversation? Or at least a few specific ones?
I should note that, otherwise, commands work perfectly. Even similar ones like "look at the note" work; it's those specific commands causing issues. "Play ____" seems to always launch YouTube Music, though. Commands like "look at the note" go through my TEXT intent and not a fallback intent.

Related

Google Assistant Hello World Draft Project not updating

Very new to Google Actions. Testing out tutorial stuff.
I have tried this in a couple of test projects just to double-check. After the initial run of any project, I do not get any updates on draft projects. No changes show up in draft projects for me, in either the simulator or on a real device.
Started a new project.
Even a blank project has a basic conversation telling you where to add things next.
Changed the text.
Noticed the prompt does not change in the testing environment.
In the pictures below I have replaced the words "hello world" with "Hey Dude" for both the fulfillment and the console output. I would expect the testing prompt to respond with "Hey Dude from fulfillment and Hey Dude from the console!" But it does not; it doesn't reflect any recent changes.
I think there may be two slightly different (but sometimes related) issues going on here.
The first is that there are known problems with the simulator being slow to pick up on updates, or the updates not seeming to show up at all. The second has to do with making sure you're deploying changes from the built-in code editor.
I don't have a clear answer to the first problem, although I know they're looking into it. I find that I can make some changes and they may not be noticed, but I know they have been picked up if I see the "Your preview is being updated..." spinner appear. There are other spinners that sometimes appear, but unless it explicitly says the preview is being updated, the changes aren't always picked up. (Sometimes they are, however.)
Usually, if I don't see this, I'll go back and force an apparent change (delete a character from a webhook handler name, then add it back) and return to the simulator. In general, this time it will say it is updating.
If you're using the Cloud Functions editor, you need to do three things:
Save the changes. You'll do this by clicking the "Save Fulfillment" button, but this only saves it so you can leave the editor. It doesn't mean that the simulator has access to it yet.
Deploy the changes. This deploys your code to Cloud Functions so they can be run. Note in the illustration that it says the code is saved, but not yet deployed.
Wait until the changes are fully deployed. Deployment takes time, and until it completes, the new code won't be available in the simulator. While it is deploying, it lets you know.
Once it has deployed, however, the message changes, and the effects should be available through the simulator (although you may still need to see the "being updated" message to be sure).
Remember, however, that you don't need to use the "Cloud Functions editor" in order to deploy a webhook. You can deploy a webhook on any web server where:
The host is public (so has a public IP address that Google can reach)
It can handle HTTPS with a non-self-signed certificate
You can even deploy to Cloud Functions for Firebase yourself, which is the same service that the Actions Builder uses. This way you set the URL once in the Actions Builder and, once it is set, you won't need to change it.
But you'll still be able to change your code by managing your own deployment separately from the Actions Builder; a minimal sketch of such a webhook follows.
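As a minimal sketch of such a self-hosted webhook in Python with Flask (assuming the Actions Builder webhook JSON shape, where the request names the handler that fired and the response returns speech under prompt.firstSimple; verify the field names against your own request payloads):

# Hypothetical minimal webhook sketch; the JSON field names are
# assumptions based on the Actions Builder webhook format.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    # The request names the webhook handler that fired.
    handler = body.get("handler", {}).get("name", "unknown")
    # Reply with simple speech; "Hey Dude" echoes the example above.
    return jsonify({
        "prompt": {
            "firstSimple": {
                "speech": f"Hey Dude from the webhook (handler: {handler})!"
            }
        }
    })

if __name__ == "__main__":
    # In a real deployment this must be served over HTTPS with a
    # CA-signed certificate, per the requirements listed above.
    app.run(host="0.0.0.0", port=8080)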

Actions on Google launch custom action (not main actions.intent.MAIN)

I have an Actions on Google agent built with Dialogflow with several actions (e.g. actions.intent.MAIN and get_day_of_week).
When I created my agent 3+ months ago, I could invoke the agent in two ways:
With the agent's name (e.g. "Talk to My Agent"), which would launch the actions.intent.MAIN intent.
With the grammar specific to an action (e.g. "Ask My Agent what day of the week is it"), which would launch the get_day_of_week action.
Without my changing anything, launching the agent with a custom action (#2 above) stopped working. Is there a way to debug this?
In the simulator, when I type "Ask My agent what day of the week is it", the request and response are empty, and the dialog in the simulator says, "The agent returned an empty TTS". I'm not sure if the request and response are empty because the simulator doesn't support launching custom actions, because Actions on Google stopped supporting launching custom actions, or because my agent broke (even though I didn't change anything). For what it's worth, this same problem happened to two distinct agents that I have.
I'm guessing there's nothing for you to debug; this appears to be a Google bug. I had the exact same thing happen to me on an action we have in production. There's no way I could have changed anything.
Here's my Reddit post, if you want to follow along.
Interestingly, the deep links don't work for me (and several others), but do work for my co-worker. And one of the commenters says deep links don't work for him unless he types it in the console. 🤷‍♂️

Actions on Google - How to access

First time here (please be gentle). I've built and had an action approved. For the life of me, I cannot work out how to run it.
In simulation mode I got a test run to work on my Google Home. However, my invocation doesn't run it, and I can't seem to find a portal to download it or anything similar.
How do you people use it now that it has been approved? I've checked my directory but there is no data.
You will be able to use the same invocation phrases that you used for testing in the simulator.
Although it is "published", it does take time to be distributed to all of Google's servers and for it to be available on everyone's Assistant. There isn't any way you can rush this process - it usually takes a couple of days.
You can look at your Actions Console, click on "Analytics" on the left, and then the "Directory" tab on top to see how it appears in the directory.
If you view the directory URL from a mobile device, you can also invoke the Action from the directory entry itself once it is available.
You will never "download" or "install" it on a Google Home. Think of it as a website and your Google Home as a browser.

Interacting with Siri via the command line in macOS

I use Siri on my phone and watch to create reminders on the go. When I'm in the office I don't want to disturb the quiet by using Siri, so I usually use an Alfred workflow that is integrated with the Reminders app, or use the Reminders app directly.
However, both have a rather clunky interface, and it would be much easier if I could just type at the command line:
$ siri "remind me to check stack overflow for responses to my question in 15 minutes"
macOS Sierra introduced Siri to the desktop, but so far I have been unable to find a way to interact with Siri other than by literally talking out loud, and Spotlight does not match Siri's natural-language comprehension.
Apple has announced the Siri SDK, but it seems primarily aimed at adding functionality to Siri, not at exposing a Siri API.
Does Apple expose any kind of API to Siri on macOS such that one could make Siri requests via the command line, system call, or other executable?
Note: I understand that this question could conceivably find a better home at Ask Different, Super User, or Unix & Linux. In the end, I decided that some programmatic integration with an API or SDK was the most probable solution, and thus Stack Overflow seemed the most appropriate place to post. If mods disagree, please do migrate to whichever community is best.
This isn't from the command line, but it's closer... and I haven't tested it, but in High Sierra there's a way to use the Accessibility settings to let you use your keyboard to ask Siri questions.
How to enable it:
System Preferences > Accessibility > Siri.
Click in the box beside Enable Type to Siri so that a tick appears.
Now when you trigger Siri, a text field will appear into which you can type your query.
Snagged from here: https://www.macworld.co.uk/news/mac-software/how-use-siri-on-mac-3536158/
I wanted the same feature today. I got it working, though it could be improved upon: https://youtu.be/VRLGCRrReog
TL;DR: use Loopback by Rogue Amoeba and change Siri's input mic to Loopback, then use the say command in Terminal, for example.
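As a rough sketch of that approach (the device name is an assumption; Loopback's virtual device is typically called "Loopback Audio"): trigger Siri first, then feed synthesized speech into it via say's -a flag, which selects the output audio device:
$ say -a "Loopback Audio" "remind me to check Stack Overflow in 15 minutes"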
As mentioned by Brad Parks, you can enable 'Type to Siri' from the Accessibility menu. You can use this to interact with Siri using simulated keypresses.
I've created a simple Python script which behaves like requested in your question when invoked from the command line.
The script uses the keyboard Python module.
#!/usr/bin/python
import sys
import time
import keyboard

def trigger_siri():
    # Hold command+space (the default Siri shortcut) to invoke Siri.
    keyboard.press('command+space')
    time.sleep(0.3)
    keyboard.release('command+space')
    time.sleep(0.2)  # Wait for Siri to load

if __name__ == '__main__':
    trigger_siri()
    keyboard.write(sys.argv[1])  # Type the query given on the command line
    keyboard.send('enter')
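For reference, a usage sketch (assuming the script above is saved as siri.py): the keyboard module needs to be installed first, and on macOS it generally has to run as root, with Accessibility permissions granted to your terminal:
$ pip install keyboard
$ sudo python siri.py "remind me to check stack overflow for responses to my question in 15 minutes"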
Cliclick is a great (and free) tool for triggering mouse and keyboard events via the command line. After installing Cliclick, I enabled "Type to Siri" (System Preferences > Accessibility > Siri). I also changed Siri's keyboard shortcut to "Press Fn (Function) Space" (System Preferences > Siri). The other keyboard shortcut options require you to "Hold" a key, which can be done, but it makes things a bit trickier.
With all that done, I can invoke Siri from the terminal with something like this:
$ cliclick kd:fn kp:space ku:fn w:250 t:"turn on the living room lights" kp:return
Going a step further, if you are familiar with terminal aliases and functions, you can create a "siricli" function:
siricli(){
    cliclick kd:fn kp:space ku:fn w:250 t:"$1" kp:return
}
Open a new terminal window after adding that function, and now you can invoke Siri from the command line like this:
siricli "turn on the living room lights"

Does install4j provide a *Completely* unattended auto update?

We are currently evaluating install4j and things are going pretty well, however I have a question about auto-update.
Currently I see options and documentation around three options for auto-update, and the third one (no version check) seems to be the closest to what we need. However, it sounds as though it still prompts the user to actually start the download/install. Is there any way to get around this? We are targeting our software as a service on many Windows boxes in a server room, so there isn't a user to click continue for that last step. I believe we could roll our own service that monitors for upgrades and does a command-line install with an answers file to prevent prompting, but I'd love to know if I missed something that would allow me to utilize install4j's auto-update.
If you go to Installer->Screens & Actions, click on the "Add" button, and choose "Add application", you can choose from a number of pre-defined templates. However, they are just templates; after adding them you can change them completely.
If the updater should be automatic but still show a progress dialog, you can just set the "Default execution mode" property of the updater application to "Unattended mode with progress dialog". In that case, no screens will be shown at all.
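To illustrate the roll-your-own trigger mentioned in the question, a hedged sketch: install4j-generated installer and updater executables accept -q for unattended execution and -varfile to supply pre-recorded variable values (confirm the flags against the install4j manual for your version). A monitoring service could then launch the downloaded updater along these lines, where updater and response.varfile are placeholder names:
$ updater -q -varfile response.varfile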