I am playing around with IBM Watson Assistant (digital chatbot) and I want to implement a delay so that the chatbot waits 1-3 seconds before sending the response.
Is there any way I can do this? I saw in earlier posts that it was not supported yet.
Mille
The only workaround I can suggest is calling, from your dialog code, a cloud function that delays for 3 seconds before returning. A delay is not supported in the dialog itself.
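For reference, here is a minimal sketch of what such a delaying action could look like as an IBM Cloud Functions (OpenWhisk) Node.js action; the delaySeconds parameter name is just an illustration, not part of any Watson API:

    // OpenWhisk Node.js action: resolves after a configurable delay.
    function main(params) {
      const ms = (params.delaySeconds || 3) * 1000; // hypothetical parameter name
      return new Promise((resolve) => {
        setTimeout(() => resolve({ delayed: true, seconds: ms / 1000 }), ms);
      });
    }

Your dialog node would call this action and only send its response once the promise resolves.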
I'm developing a smart device that needs to respond to a trigger and take an action. However, I'm having some trouble determining what will host the code that fires the trigger. Google Home appears to have time-based events, but I can't seem to find anything that can trigger an event based on something like the weather. IFTTT seems like a natural fit, but having customers install IFTTT and then find my applet is a bit cumbersome. I could have my server monitor the condition and fire the trigger, but ideally the trigger would be generated on-prem.
So my question... Does anyone have a good suggestion for where to host code that fires a trigger that is sent to a smart device?
*first-time poster so forgive me for any lack of formalities
Automations on Google Home are available for triggering actions, but they might not cover all the use cases you specified. You can create your own system that changes the states of the devices based on your conditions, then report them to Google via Report State.
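As an illustration, here is a hedged sketch of a Report State call using the googleapis Node.js client; the agentUserId and device id are placeholders, and auth is assumed to be a service-account credential with the HomeGraph scope:

    const { google } = require('googleapis');

    // Push the new device state to Google after your own condition fires.
    async function reportState(auth, deviceId, states) {
      const homegraph = google.homegraph({ version: 'v1', auth });
      await homegraph.devices.reportStateAndNotification({
        requestBody: {
          agentUserId: 'user-123',        // placeholder user id
          requestId: 'req-' + Date.now(), // any unique request id
          payload: { devices: { states: { [deviceId]: states } } },
        },
      });
    }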
I'm working in the Google Actions Console.
I want my Google agent to verbally warn that time is up (instead of setting a timer, for instance).
I now have two main scenes:
1. the user says "I am ready", and the agent responds "OK. Ready, set, go!";
2. (the user says nothing and) the agent says "please stop now".
I would like the prompt in 2 to proactively run 5 minutes after the end of the prompt in 1, without the user having to say anything.
Is it possible to create a timer/delay of 5 minutes before the transition from scene 1 to scene 2, or to have the prompt in scene 2 delayed by 5 minutes? How can I create this delay? Is there any workaround otherwise?
NB: I'm not a developer so be patient :D
This is difficult to do without code, but not impossible.
First - in general, Actions on Google is poorly suited for this: it is much better for conversational systems than for timed events.
You have two options for how to do this:
As part of an Interactive Canvas game.
Using a Media response.
As part of an Interactive Canvas game
This scenario has you controlling the timer using JavaScript code that is part of an Interactive Canvas page that you have loaded on a Smart Display or Smart Phone device. As part of the "Ready Set Go" response, you send data back to indicate that your local code should start the timer.
You'll capture this data in the onUpdate() callback and, in your callback function, set the timer using JavaScript's setTimeout() function. In the function that setTimeout() triggers when the delay elapses, you can call the sendTextQuery() function to continue the conversation.
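Here is a rough sketch of what that page-side code could look like; the startTimer and seconds fields are hypothetical names for whatever data your webhook sends along with the "Ready Set Go" response:

    // Runs inside the Interactive Canvas web page on the device.
    const callbacks = {
      onUpdate(data) {
        const state = data[data.length - 1]; // latest state sent by the webhook
        if (state && state.startTimer) {
          setTimeout(() => {
            // Continue the conversation once the delay has elapsed.
            interactiveCanvas.sendTextQuery('time is up');
          }, (state.seconds || 300) * 1000);
        }
      },
    };
    interactiveCanvas.ready(callbacks);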
Using a Media response
This will work on devices that can play long-form audio, but do not have a screen (so they can't use the Interactive Canvas).
In this scenario, when you send the "Ready Set Go" response, you also include a Media prompt which plays a 5-minute-long audio file.
When the audio finishes playing, it will send a MEDIA_STATUS_FINISHED System Intent, which you can handle and then reply to in order to continue the conversation.
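Here is a sketch of how the webhook side could look with the @assistant/conversation library; the handler names and the audio URL are placeholders (you would host a 5-minute audio file yourself and wire the second handler to the MEDIA_STATUS_FINISHED system intent in Actions Builder):

    const { conversation, Media } = require('@assistant/conversation');
    const app = conversation();

    app.handle('ready_set_go', (conv) => {
      conv.add('OK. Ready, set, go!');
      conv.add(new Media({
        mediaType: 'AUDIO',
        mediaObjects: [{
          name: 'Five minute timer',
          description: 'Plays while you wait',
          url: 'https://example.com/five-minutes.mp3', // placeholder URL
        }],
      }));
    });

    // Wired to the MEDIA_STATUS_FINISHED system intent in the console.
    app.handle('media_finished', (conv) => {
      conv.add('Please stop now.');
    });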
Which should you use?
Well... maybe both. Media works better on Smart Speakers, while the Interactive Canvas works better on Smart Displays and Smart Phones (assuming your Action is a Game).
Hi all:
We want to invoke Google Assistant with custom actions via a button, not the voice input (hotword).
For example, we usually invoke Google Assistant with a phrase like "Hello, Google, show me the weather." But in our product, we want the user to press one specific button, which would then send that sentence to Google Assistant directly.
But we can't find any APIs that support this requirement. We have also heard that Google plans to support a hardware-key method, since Samsung created a good experience with it on the S8.
Can anyone help us fill this gap?
Thank you!
You could use an Action link without any additional parameters specified to trigger the MAIN intent, or specify the custom intent you'd like to trigger.
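For illustration, an Action link is just a URL; the uid below is a placeholder, and the intent query parameter is optional (omit it to trigger the MAIN intent):

    https://assistant.google.com/services/invoke/uid/<your-action-uid>?intent=MyCustomIntent

Opening this URL (for example, from your button handler) hands the invocation off to the Assistant.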
I have built a chat app using Twilio to completion, but I have noticed that initializing it is pretty slow on an EDGE connection, averaging 10-15 seconds (WhatsApp and Telegram take about 3 seconds on the same connection). This is without having set any region via properties on the SDK. I am looking to achieve a snappy startup time like that of Telegram/WhatsApp.
To work around this, I figured it might be a network latency problem and that setting a different region might resolve it. So far I have tried setting the regions listed here https://www.twilio.com/docs/api/client/regions but I am getting the error message "request to EMS service has failed, unable to set FPA token" with error code 0.
I am in Africa and my target audience will mostly be running on EDGE connections.
Please help me resolve this.
Thanks.
Twilio developer evangelist here.
Twilio Programmable Chat does not currently take a region option the way Twilio Client v1.4 does. In fact, when initializing a chat client, the only option you can set right now is the logLevel.
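For example, assuming the twilio-chat JavaScript SDK, initialization is this sparse (token is an access token minted by your server):

    const Chat = require('twilio-chat');

    // logLevel is currently the only supported option here.
    Chat.Client.create(token, { logLevel: 'info' })
      .then((client) => {
        // Client is ready; there is no region option to pass.
      });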
If you are interested in multi-region chat services, I suggest you get in touch with Twilio support to register that this is an issue for you.
I’ve already built an Alexa skill, and now I want to make that available on Google Home. Do I have to start from scratch or can I reuse its code for Actions on Google?
Google Assistant works similarly to Amazon Alexa, although there are a few differences.
For example, you don't create your language model inside the "Actions on Google" console. Most Google Action developers use Dialogflow (formerly API.AI), which is owned by Google and offers a deep integration. Dialogflow used to offer an import feature for Alexa interaction models, but it no longer works. Instead, you can take a look at this tutorial: Turn an Alexa Interaction Model into a Dialogflow Agent.
Although most of the work of developing voice apps is parsing JSON requests and returning JSON responses, the Actions on Google SDK works differently from the Alexa SDK for Node.js.
To help people build cross-platform voice apps with only one code base, we developed Jovo, an open-source framework that is a little closer to the Alexa SDK than to the Actions on Google SDK. So if you're considering porting your code over, take a look; I'm happy to help! You can find the repository here: https://github.com/jovotech/jovo-framework-nodejs
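To give a flavor, here is a minimal handler sketch (using a later Jovo version's API as an illustration) that serves both platforms from one code base:

    const { App } = require('jovo-framework');
    const { Alexa } = require('jovo-platform-alexa');
    const { GoogleAssistant } = require('jovo-platform-googleassistant');

    const app = new App();
    app.use(new Alexa(), new GoogleAssistant());

    // One handler, invoked for both Alexa and Google Assistant requests.
    app.setHandler({
      LAUNCH() {
        this.tell('Hello from Alexa and Google Assistant!');
      },
    });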
It is possible to manually convert your Alexa skill to work as an Assistant Action. Both a skill and an action have similar life cycles that involve accepting incoming HTTP requests and then responding with JSON payloads. The skill’s utterances and intents can be converted to an Action Package if you use the Actions SDK or can be configured in the API.ai web GUI. The skill’s handler function can be modified to use the Actions incoming JSON request format and create the expected Actions JSON response format. You should be able to reuse most of your skill’s logic.
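To make the mapping concrete, here is roughly the same final answer in both payload shapes (fields abridged), shown as JavaScript objects:

    // Alexa skill response
    const alexaResponse = {
      version: '1.0',
      response: {
        outputSpeech: { type: 'PlainText', text: 'Hello!' },
        shouldEndSession: true,
      },
    };

    // Actions on Google conversation webhook response
    const actionsResponse = {
      expectUserResponse: false,
      finalResponse: {
        richResponse: {
          items: [{ simpleResponse: { textToSpeech: 'Hello!' } }],
        },
      },
    };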
This can be done, but it will require some work; you will not have to rewrite all of your code, though.
Check out this video on developing a Google Home Action using API.AI (which is the recommended approach).
Once you have done the basics and started understanding how Google Home Actions differ from Amazon Alexa Skills, you can simply transfer your logic over. The idea of intents is very similar, but each platform has its own intricacies that you must learn.
When you execute an intent, your app logic will be similar in most cases. It is just the setup, deployment, and running that are different.