Audio doesn't play in SSML if action is activated by keyboard - actions-on-google

I am trying to play an audio file in the SSML using code like this:
conv.ask(`<speak>playing sound
<break time="300ms"/>
<audio src="sound.mp3"/>
</speak>`);
It works fine on the iPhone using Google Assistant: if the action is invoked by voice, it plays back the audio. However, if I activate the action by typing the action name on the keyboard ("talk to ....."), only the display text is shown and no audio is played back. Am I doing something wrong? Is there any way to fix this?

It turned out that this is the intended behaviour of AoG. If the action is invoked by keyboard, only the text part of the response is displayed and the "audio" tag inside the SSML is ignored. I had to change my response to include a MediaObject, and it now plays the audio regardless of whether it is invoked by voice or keyboard.
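A minimal sketch of that fix, building the raw media-response payload by hand so the structure is visible (the URL, names, and the helper function are placeholders, not from the original action; with the actions-on-google client library you would instead pass a MediaObject to conv.ask):

```javascript
// Carry the audio as a media response instead of an SSML <audio> tag,
// so playback also works when the action is invoked by keyboard.
// All strings below are illustrative placeholders.
function buildMediaResponse(text, audioName, audioUrl) {
  return {
    richResponse: {
      items: [
        { simpleResponse: { textToSpeech: text, displayText: text } },
        {
          mediaResponse: {
            mediaType: 'AUDIO',
            mediaObjects: [{ name: audioName, contentUrl: audioUrl }],
          },
        },
      ],
    },
  };
}

const response = buildMediaResponse(
  'Playing sound',
  'sound',
  'https://example.com/sound.mp3'
);
```

The key difference from the SSML approach is that the audio travels as its own response item rather than being embedded in the spoken text, so a text-only invocation still gets a playable media card.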

Related

jPlayer doesn't play audio file automatically in iPad

I am trying to play an audio file using jPlayer on an iPad. It works fine in Safari on my local PC, but when I open it on my iPad it doesn't play the audio automatically.
Please help me.
Thanks
You may have to initialize jPlayer on a user action in Mobile Safari.
The first time a media element is played, playback must be initiated by a user gesture, i.e., the user must click the play button. This affects the operation of jPlayer("play") in the ready event handler: the browser will ignore the command, and jPlayer will simply wait until the user presses the play button.
Once the first gesture has been received, JavaScript code is then allowed to do whatever you want with the media element. Note that a jPlayer media player instance uses an audio and a video element; each requires its own gesture.
As a hack, you could add a handler for a touch event on an outer container element (or the body) and initialize any jPlayer instances that you want there.
Ref: http://www.jplayer.org/latest/developer-guide/#jPlayer-known-issues-event-driven
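The hack described above can be sketched as a one-shot gesture handler; the event name and wiring are illustrative, and inside the init callback you would call jPlayer("play") on your own instances:

```javascript
// Run an init function exactly once, on the first user gesture on `target`.
// In a real page, `target` would be document.body and `init` would kick off
// your jPlayer instances; both are kept generic here.
function onFirstGesture(target, init) {
  let done = false;
  const handler = () => {
    if (done) return;
    done = true;
    target.removeEventListener('touchstart', handler);
    init(); // safe now: this runs inside a user-gesture context
  };
  target.addEventListener('touchstart', handler);
}
```

Because the handler removes itself after the first touch, repeated gestures don't re-initialize the player.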
Do you mean it doesn't play automatically? Because if it does, I don't see what your problem is.
Check whether you have entered the correct path for the swfPath option.

UIWebView intercept "setting movie path:" from javascript audio player

In iOS 4, I've got a page loaded in a UIWebView with a JavaScript audio player. I did not create this player; it is owned by a third party, so I can't tinker with it. When I click the play button, I see an NSLog printout like the following:
setting movie path: http://data.myaudio.com/thefile.mp3
My question is: what is getting its movie path set, and how do I intercept it? The audio will continue to play until I create another UIWebView, use the built-in audio controls accessible via an iPhone home button double tap, or close the app. I can't intercept the path with shouldStartLoadWithRequest:; the JavaScript function audio.play() appears to call some built-in player directly. I'd like to control where and how the audio is played, but short of parsing the HTML for any <audio> tags, I can't figure out how to grab that path and point it somewhere other than the default.
UIWebView is essentially a wrapper around WebKit. Apple does not want you to touch anything more of it than is exposed by the existing delegate methods.
That said, you can modify the DOM of any loaded document by injecting JavaScript. That way you could also override audio.play to do nothing and instead hand the URL to your own player, which you can control.
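A sketch of the script that could be injected into the web view (e.g. via stringByEvaluatingJavaScriptFromString:). It swaps out the player's play method so the page never starts playback and the URL is reported instead; the reporting mechanism here is a plain callback, whereas in a UIWebView you would typically navigate to a custom URL scheme and catch it in shouldStartLoadWithRequest: — that part is an assumption about your setup:

```javascript
// Replace `player.play` so the page's player stays silent and the
// audio URL is handed to our own code instead. Returns the original
// play function in case we ever want to restore it.
function interceptPlay(player, onIntercept) {
  const originalPlay = player.play;
  player.play = function () {
    onIntercept(this.src); // hand the URL to native code / our own player
    // deliberately do NOT call originalPlay - the built-in player never starts
  };
  return originalPlay;
}
```

In the real page you would apply this to each audio element (or to HTMLAudioElement.prototype) after the document finishes loading.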

How do you handle pause and resume of speech when using the Flite libraries?

I have developed a sample application in which I use the Flite libraries for text-to-speech conversion. But now I cannot work out how to pause and resume speech using Flite's API/classes, because I think Flite converts the text string into a WAV file and then, once conversion is complete, plays that sound file in the background. So when the user presses the "Pause" button, how can I know how much of the text has been converted into audio output, so that I can start with the remaining text on resume?
In the FliteTTS.m file you can uncomment the preloaded audio player and comment out the AV player's "play". Then make a new method that calls the AV player's play. That way your whole string is converted into a file and you can play and pause it at will.
If you are asking how to know which word playback is on when your user hits "pause", that is tricky: since it is all one audio file, you can't.
Maybe you could send Flite TTS each word separately, convert them, and play them back in series, keeping track along the way; I have no idea whether that will sound very good.
Or use speech recognition to listen to the AV player and guess. Or maybe analyze the string length, time the audio player, and guess.

Is there anyway to play the "sent mail" sound clip that the mail client uses?

I was wondering if there is a way to play the "sent mail" sound clip in your app without using MFMailComposeViewController. I have a custom view controller for sending messages in my app. Is there a way to play that sound when the user sends a message?
There is no function or method for that, but what you can do is find that sound in the frameworks and play it like any other sound you have.
For example here you can find the path of the keyboard click sound:
Increase keyboard click sound back to normal?
You will have to look for the mail sound, probably in the same UIKit.framework or in whichever framework looks like the one that contains the mail functionality.

Is there an easy way to stream a m3u in iPhone?

I can have a UIWebView with the .m3u file opened, which shows the web view with a play button; tapping it automatically goes to the QuickTime player and starts playing the stream. But when I press the Done button, it goes back to the UIWebView with a little play button in the middle, and from there you can go back to the previous screen (it was selected from a table view). I just want it to automatically load the QuickTime player in the view. How can I do that?
An m3u file is nothing more than a text file listing MP3 (and/or other format) digital audio files, to be interpreted by player software as a series of audio files played in succession. So my best guess (I am about to implement this myself, so I'll soon find out whether it actually works that way) at going about this is:
read in m3u file
parse for stream URLs and store these
choose / let user decide which one to play
implement a streaming player as explained here.
There is no step 5. I hope.