I created a skill that plays a short audio clip using the Audio tag inside SSML. After the audio, I ask the user a few questions about whether he or she wants to hear something else. The answers should be Yes or No, and I use the built-in YesIntent and NoIntent to process them, storing the question state in session attributes. The requests are processed by a C# function deployed as an Azure Function.

I enabled this skill for beta testing and shared it with a few testers. Alexa plays the audio, asks the questions, and follows my workflow correctly as long as only a single tester uses the skill at a time. If two or more testers use the skill at the same time, the session ID gets replaced with the one from the latest request. For example: Tester T1 invokes the skill, Alexa launches it with session ID ABC123Tester1 and starts playing the audio I set. Two or three seconds later, Tester T2 invokes the same skill and Alexa launches it with session ID 123ABCTester2. After Tester T2's request, Tester T1's session ID is replaced with Tester T2's, i.e. the session ID for Tester T1 is now 123ABCTester2. From then on the question flow is mixed up: if question 1 was asked to Tester T1, question 2 goes directly to Tester T2, and so on. If Tester T2 answers before Tester T1, Alexa asks Tester T2 the 3rd question and Tester T1 the 4th. This happens because the session ID is replaced. Is there a way to fix this issue?
Any help will be appreciated.
Note that the starting audio length is about 15 seconds.
Thanks
Javed
So there is no issue with the Alexa session. Alexa maintains one session per user, and I was accessing the skill with the same user ID on two different systems, which caused the problem. When I tried with two different user IDs accessing the same skill at the same time, the issue was gone. There is no session issue, and the skill executes as per my workflow.
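To make that concrete, here's a minimal sketch (in Python rather than the C#/Azure Functions setup, purely for illustration) of keeping per-user question state keyed by the `userId` from the Alexa request envelope, so two simultaneous users can never overwrite each other's state. The field names follow the standard Alexa request JSON; the session IDs are the made-up ones from the question.

```python
# Per-user question state, keyed by the userId from the Alexa request
# envelope. Two testers interleaving requests keep independent counters.
question_state = {}  # userId -> index of the next question to ask

def next_question(alexa_request):
    """Return the next question index for the user who sent this request."""
    session = alexa_request["session"]
    user_id = session["user"]["userId"]  # unique per user, stable per skill
    index = question_state.get(user_id, 0)
    question_state[user_id] = index + 1
    return index

# Two different users, as in the scenario from the question:
t1 = {"session": {"sessionId": "ABC123Tester1", "user": {"userId": "user-1"}}}
t2 = {"session": {"sessionId": "123ABCTester2", "user": {"userId": "user-2"}}}
```

Interleaved calls with `t1` and `t2` advance each user's counter independently, which is exactly the behavior the Alexa service provides when the testers really are distinct users.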
I'm trying to create a Facebook chatbot that only responds once to each unique inquiry. I have about 25 intents with unique responses triggered by unique training phrases. These are working fine.
However, I ran into the issue where the customer would continue chatting and the chatbot would continue responding. Basically, every time the customer sent something, the chatbot would respond, resulting in an infinite loop which frustrated the customer.
I saw a video that said creating a "follow-up intent" under the intent and setting the lifespan of the output context to 2 would solve the problem. I tested it with my default welcome message and it worked. For example, if the customer said "Hi," chatbot would respond "Hello." If again, the customer said hi in that same conversation, the chatbot would not respond.
HOWEVER. When I applied the follow-up intents to the other 24 intents, they did not work. Only the default welcome intent was working correctly. I set all the lifespans to 2, made sure to save, and even refreshed the page. In the training agent, I noticed the action for the working welcome message was "Fallback.Fallback-fallback" while the action for the non-working 24 other intents was "Not available."
Does anyone have a solution to this? Or an alternative solution to the issue of looping conversations?
Thank you.
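If the follow-up intent UI keeps misbehaving, one alternative is to set the output context from a webhook instead of the console. This is a rough sketch of the Dialogflow v2 webhook response shape that attaches an output context with a lifespan of 2; the project/session path and the `greeted` context name are placeholders, not anything from the agent described above.

```python
import json

def build_response(session_path, reply_text):
    # session_path comes from the incoming request's "session" field, e.g.
    # "projects/<project-id>/agent/sessions/<session-id>" (placeholder here).
    return {
        "fulfillmentText": reply_text,
        "outputContexts": [
            {
                # Context names are scoped under the session path.
                "name": f"{session_path}/contexts/greeted",
                "lifespanCount": 2,  # context survives the next 2 turns
            }
        ],
    }

resp = build_response("projects/my-project/agent/sessions/abc", "Hello!")
print(json.dumps(resp, indent=2))
```

A matching intent can then require the absence (or presence) of the `greeted` input context, which gives the same "respond only once" effect the follow-up intent trick was aiming for.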
I see there is a limit that a user has to respond by before the conversation ends:
"Your response must occur within about 5 seconds or the Assistant assumes your fulfillment has timed out and ends your conversation."
How long does it take for the app to time out and exit the conversation
But is there a maximum time that a user can respond for (typed or voice)? We want to allow longer responses (and then access the response text).
Ideally we would like an unlimited response time and the ability to access the raw input (typed or voice) when it is received.
It would be excellent if we could access the audio from the user's response, but as I understand that is not possible.
As I explained here, you can't have access to the raw audio recordings of interactions with your actions as of now. You only get access to the transcription of the user's utterance.
The quote you've supplied:
"Your response must occur within about 5 seconds or the Assistant assumes your fulfillment has timed out and ends your conversation."
isn't about the user's response. Your webhook fulfillment must complete within 5 seconds; otherwise, the Assistant will time out and end the conversation.
As far as the length of the user's prompt goes: if the user doesn't say anything, a no-input prompt is triggered (on smart speakers), or the mic is simply closed (on smartphones), after around 8 seconds. (I don't know if there's an official source confirming 8 seconds; this is just my experience.)
But once the user starts speaking, the Assistant keeps listening until the user stops talking. So you can theoretically get a long prompt from the user. However, I wouldn't recommend this, since it would be a terrible user experience from a conversation design standpoint.
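To stay inside that roughly 5-second fulfillment budget, one common pattern is to answer immediately and push any slow work onto a background worker. A rough Python sketch; the queue/worker split is an assumption about your architecture, not part of any Assistant API:

```python
import queue
import threading

work_queue = queue.Queue()

def slow_task(payload):
    # Placeholder for work that can't finish inside the webhook deadline,
    # e.g. calling a slow backend service.
    pass

def worker():
    # Drains the queue in the background, outside the webhook's time budget.
    while True:
        payload = work_queue.get()
        slow_task(payload)
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_webhook(request_payload):
    # Enqueue the slow part and answer right away, well under the deadline.
    work_queue.put(request_payload)
    return {"fulfillmentText": "Working on it. Ask me again in a moment."}

reply = handle_webhook({"queryResult": {}})
```

The user then re-prompts (or you use a follow-up turn) to fetch the finished result; the webhook itself never blocks on the slow work.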
We are working on a project using Google Home.
Details:
We have built certain intents in Dialogflow, with follow-up questions to collect the parameter values as a multi-turn dialogue. When testing in the Dialogflow test console, I ask:
"Can you help in booking a table?": It prompts back with the right question ("Where do you want to book a table?") as configured in Dialogflow.
"Where do you want to book a table?": I answer "Some Restaurant". It prompts back with the right question ("When do you want to book a table?") as configured in Dialogflow.
"When do you want to book a table?": I answer "Today". It prompts back with the right question ("For how many guests?") as configured in Dialogflow.
"For how many guests?": I answer "4 people". It ends the conversation, as configured in Dialogflow.
The above conversation works perfectly fine as expected.
When I test using the integration for Google Home (the simulator with the Actions SDK) [See how it works in Google Assistant]:
I invoke the app explicitly ("Talk to [APP NAME]") and it gets invoked with the right greeting message as configured.
But when I then ask the questions mentioned above, the app leaves the conversation and nothing is answered back.
Not sure why this issue is happening. Is there anything I am missing in the configuration?
Walk through your intents in Dialogflow and make sure 'Set this intent as end of conversation' is not enabled (and, if you're using a webhook, that the webhook isn't ending the conversation either). Look at the Responses section in Dialogflow.
Start with the Default Welcome intent, then check each intent and all follow-up prompts.
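If the webhook turns out to be the culprit, make sure its response keeps the conversation open. For the Dialogflow-to-Actions on Google integration, that's the `expectUserResponse` flag in the `google` payload. A minimal sketch (the prompt text is a placeholder):

```python
def keep_mic_open_response(text):
    # For the Actions on Google integration, expectUserResponse=False is
    # what ends the conversation; setting it True keeps the mic open.
    return {
        "fulfillmentText": text,
        "payload": {
            "google": {
                "expectUserResponse": True,  # do not end the conversation
                "richResponse": {
                    "items": [{"simpleResponse": {"textToSpeech": text}}]
                },
            }
        },
    }

resp = keep_mic_open_response("Where do you want to book a table?")
```

If every follow-up prompt is returned with this shape, the app should keep the multi-turn dialogue going in the simulator just as it does in the Dialogflow test console.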
For personal Gmail accounts, Web & App Activity is enabled automatically once you turn it on.
For G Suite accounts, even when Web & App Activity is turned on, it also needs to be enabled by the admin of that organization. Only after it is enabled will the simulator behave as expected.
I think Actions straight up doesn't work for some (all?) Gsuite accounts, regardless of what permissions you set. Google knows but doesn't care. I spent weeks in an Actions support conversation on this topic and they ultimately punted me to the Gsuite team, who couldn't help. See also:
Sorry, this action is not available in simulation
Actions on Google won't respond to explicit invocations
I created an app on Google Home, and I want the app to work only if it's MY voice that asks it to do things on my Google Home.
How can I do that? Is this possible? I configured my Google Home at the beginning so it recognizes my voice; now how do I make that mandatory for this app?
I'm trying to make my app secure because it performs banking operations by voice.
Each user request is sent to your application with a unique anonymous UserID. You will need to determine the UserID that belongs to your account (by looking at your application's logs to see which value is yours) and reject requests from other UserIDs.
Even better would be to set up a proper Account Linking system.
Keep in mind, however, that the voice authentication system isn't perfect and there is a slight, but possible, way for others to duplicate the request - either by using a recording of your voice or by having a similar voice. Consider all the risks when designing such applications.
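A minimal sketch of that UserID check (the stored ID and the response strings are placeholders; for a real banking action you'd still want Account Linking on top of this):

```python
# Placeholder: in practice, read your own anonymous UserID out of your
# application's logs and store it in configuration, not in source code.
MY_USER_ID = "my-own-anonymous-user-id"

def is_authorized(request_payload):
    # The anonymous UserID arrives with each request; reject all others.
    user_id = request_payload.get("user", {}).get("userId")
    return user_id == MY_USER_ID

def handle(request_payload):
    if not is_authorized(request_payload):
        return {"speech": "Sorry, this action is private."}
    return {"speech": "Welcome back. What would you like to do?"}
```

Note that this only filters by account, not by voice; as the answer says, the voice-match layer itself is not something your fulfillment can verify.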
I'd like to have a game in which, every time one of your friends beats your high score, it sends you a notification.
There are many ways to do it, but I would like to know if there is a way to "ask" your iPhone to call one of your applications or services, which then contains the code that checks the leaderboards and pushes a notification if needed.
Not sure if this is clear enough ;p
Thank you for your help
If you want to try something like this, you must do it from within the app; that is, the system won't run your app's code on its own. The user opens the app and then presses a button or something similar to check the leaderboards.
So there is no way to just "ask" your iPhone!
The best way is to have an option within the app through which the user confirms that he wants to receive notifications when his high score is beaten. You then keep his high score on your server, and if someone breaks it, you post a notification from your server.
Your highest score must be stored somewhere. If you store it on your server, then you can use web services so that every time the user gets a new highest score (on his/her iPhone), the app sends a web service request to the server and gets the results back.
On the server side, all you have to take care of is maintaining each user's friend list and the highest score of every player.
Hope this helps.
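The server-side bookkeeping both answers describe can be sketched like this (Python for illustration; `notify` stands in for whatever push mechanism, e.g. APNs via your provider, you'd actually call):

```python
scores = {}   # player -> current high score
friends = {}  # player -> set of friends to notify

def submit_score(player, score, notify):
    """Record a new score; notify any friend whose high score was just beaten."""
    previous = scores.get(player, 0)
    if score <= previous:
        return  # not a new personal best, nothing to do
    scores[player] = score
    for friend in friends.get(player, set()):
        # Only friends overtaken by THIS submission get a notification;
        # anyone already below `previous` was notified earlier.
        if previous <= scores.get(friend, 0) < score:
            notify(friend, player, score)

# Example: alice's second submission overtakes bob's high score of 50.
notifications = []
friends["alice"] = {"bob"}
scores["bob"] = 50
submit_score("alice", 40, lambda *args: notifications.append(args))
submit_score("alice", 60, lambda *args: notifications.append(args))
```

The first submission (40) triggers nothing because bob's 50 still stands; the second (60) produces exactly one notification for bob, which is the server-pushed event the answers suggest.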