How can a VST audio plugin detect stream interrupts from the VST host?

I have developed a simple VST plugin. The plugin has an internal buffer of audio samples that should be cleared whenever the audio stream is interrupted.
Now if I use this plugin in a media player (like Foobar with a VST wrapper plugin) and I use the seek bar to skip to some position in the song, or I switch to a new song, I still hear the tail of the previous audio.
Is there any VST callback or similar mechanism that notifies the plugin about such stream interrupts?

There isn't exactly a notification hook, but it is pretty easy to see whether playback has started or stopped. The host should call your plugin's suspend() and resume() when the transport stops and starts, respectively. In those calls you can query the host's playback state with getTimeInfo() (declared in audioeffectx.h), passing the kVstTransportChanged and kVstTransportPlaying filter flags and checking them in the returned VstTimeInfo so your plugin can react to transport changes.
However, some hosts might be naughty and not call suspend()/resume() when only the playback position changes but the transport state does not. I'm not sure how CPU-costly it is to query the time info during process(), but you can try doing it there to see whether the host is jumping around in the arrangement.
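A minimal sketch of both ideas, assuming the VST 2.4 SDK; MyPlugin and clearBuffer() are hypothetical names, not part of the SDK:

```cpp
// Sketch only -- a plugin class derived from AudioEffectX (VST 2.4 SDK).
#include "audioeffectx.h"

class MyPlugin : public AudioEffectX
{
public:
    MyPlugin(audioMasterCallback master) : AudioEffectX(master, 1, 0) {}

    // Called by well-behaved hosts when the transport stops.
    void suspend() override { clearBuffer(); }

    void processReplacing(float** inputs, float** outputs, VstInt32 sampleFrames) override
    {
        // Ask the host for time info, filtered on the flags we care about.
        VstTimeInfo* timeInfo = getTimeInfo(kVstTransportChanged | kVstTransportPlaying);
        if (timeInfo && (timeInfo->flags & kVstTransportChanged))
        {
            // Transport started, stopped, or jumped to a new position:
            // drop the buffered tail so it doesn't play after a seek.
            clearBuffer();
        }
        // ... normal sample processing follows ...
    }

private:
    void clearBuffer() { /* flush the plugin's internal sample buffer */ }
};
```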

Flutter lower background volume while TTS plays

I have a text-to-speech app, and I'm wondering if there is a way to let the user listen to the TTS audio while they are listening to their music, e.g. on Spotify or another audio player.
At the moment the TTS plays over the top of Spotify by default. Spotify doesn't stop when the TTS starts, which is good, but it is too loud.
Does anyone know if it's possible to lower the volume of Spotify, or other music that is playing, when the user presses play on the TTS?
This requires the use of AudioManager and audio focus. Unfortunately, there isn't yet a Flutter package that talks to the platform through a platform channel to control volume or request audio focus programmatically.
developer.android.com/guide/topics/media-apps/volume-and-earphones
> Controlling stream volume programmatically
> In rare cases, you can set the volume of an audio stream programmatically. For example, when your app replaces an existing UI. This is not recommended because the Android AudioManager mixes all audio streams of the same type together. These methods change the volume of every app that uses the stream. Avoid using them:
> adjustStreamVolume()
> adjustSuggestedStreamVolume()
> adjustVolume()
> setStreamVolume()
> setStreamSolo()
> setStreamMute()
About Audio Focus -
developer.android.com/guide/topics/media-apps/audio-focus
> Managing audio focus
> Two or more Android apps can play audio to the same output stream simultaneously. The system mixes everything together. While this is technically impressive, it can be very aggravating to a user. To avoid every music app playing at the same time, Android introduces the idea of audio focus. Only one app can hold audio focus at a time.
> When your app needs to output audio, it should request audio focus. When it has focus, it can play sound. However, after you acquire audio focus you may not be able to keep it until you're done playing. Another app can request focus, which preempts your hold on audio focus. If that happens your app should pause playing or lower its volume to let users hear the new audio source more easily.
> Audio focus is cooperative. Apps are encouraged to comply with the audio focus guidelines, but the system does not enforce the rules. If an app wants to continue to play loudly even after losing audio focus, nothing can prevent that. This is a bad experience and there's a good chance that users will uninstall an app that misbehaves in this way.
#cmd_prompter is right that you're looking for AudioManager on Android. (On iOS, the equivalent is AVAudioSession.)
However, there is now a Flutter package available for this use case: the audio_session package.
The package lets you control how your app interacts with background audio. You can ask other apps to "duck" their audio (temporarily lower their volume) or to pause playback altogether.

Google Assistant for voice-input game

I'd like to develop a game/skill on Google Assistant that requires the following, once the user has entered the game/session ("Hey Google, start game123"):
playing an audio file that is a few minutes long
playing a second audio file while the first clip is still playing
always listening: while the files are playing, the game needs to listen for and respond to specific voice phrases, without the "Hey Google" keyword
Are these capabilities supported? Thanks in advance.
"Maybe." A lot of it depends what devices on the Actions on Google platform you're looking to support and how necessary some of the requirements are. Depending on your needs, you may be able to play some tricks.
Playing an audio file that is "a few minutes" long.
You can play audio using SSML that is up to 120 seconds long. But that will be played before the microphone is opened to accept a response.
For longer files, you can use a Media Response. This has the interesting feature that when the audio finishes, an event will be sent to your server, so you have some limited way to handle timed responses and looping. On the downside - users have to say "Hey Google" to interrupt it. (And there are currently some bugs when using it.)
Since you're doing a game, you can take advantage of the Interactive Canvas. This will let you use things such as the HTML <audio> tag and the Web Audio API. The big downside is that this is only available on Smart Displays and Android devices - you can't use it on Smart Speakers.
Playing multiple audio tracks
Google has an extension to SSML that allows parallel audio tracks for multiple spoken and audio output. But you can't layer these on top of a Media Response.
If you're using the Web Audio API with the Interactive Canvas, I believe it supports multiple simultaneous sources.
Can I leave the microphone open so they don't have to say "Hey Google" every time?
Probably not, but this may not be a good idea in some cases, anyway.
For Smart Speakers, you can't do this. People are used to something conversational, so they're waiting for the silence to know when they should be saying something. If you are constantly providing audio, they don't necessarily know when it is their "turn".
With the Interactive Canvas devices, we have a display that we can work with that cues them. And we can keep the microphone open during this time... at least to a point. The downside is that we don't know when the microphone is open and closed, so we can't duck the audio during this time. (At least not yet.)
Can I do what I want?
You're the only judge of that. It sounds like the Interactive Canvas might work well for your needs - but won't work everywhere. In some cases, you might be able to determine the capabilities of the device the user is playing with and present slightly different games depending on the features you have. Google does this, for example, with their "Lucky Trivia" game.

How to continuously stream audio while an Icecast server changes streams

Problem:
Streaming live audio via an Icecast mountpoint. On the server side, when the live show stops, the server reverts to playing a music playlist (the actual mountpoint stays /live). However, when the live stream stops, the audio player stops too; dev tools say the request has been cancelled. The player must be HTML5, so no Flash.
Mountpoint: http://198.154.112.233:8716/
Stream: http://198.154.112.233:8716/live
I've tried:
Listening for the stream to end and telling the player to reconnect. However, none of the events in the jPlayer and MediaElement.js APIs fire when the stream is interrupted.
I'm also contacting the server host to ask for advice on dealing with their behind-the-scenes playlist switcher.
I'd like to find a client-side solution to this. Could WebSockets / WebRTC solve this problem by keeping a connection open?
Your problem isn't client-side; it's how you are handling your encoding. No client-side change can properly fix this problem.
Your stream is configured so that the encoder uses files on disk as a backup stream. Unfortunately, it sounds like instead of re-encoding and splicing them in (and matching sample rate and channel count if needed), it is just sending the raw file data.
This works some of the time, as MPEG decoders are often tolerant of corrupt streams and will re-sync. However, sometimes the stream is too broken and the decoder gives up. The decoder will also often stop if there is a change in sample rate or channel count. (Bitrate changes are generally not a large problem.)
To fix your problem, you must contact your host.
Yes, this is unfortunately a problem if the playlist and the live stream are not encoded with the same codec and parameters. Additional tools such as Liquidsoap have solved the problem for me, while also providing many more features:
savonet.sourceforge.net

Streaming audio from a microphone on a Mac to an iPhone

I'm working on a personal project where the iPhone connects to a server-type application running on a Mac. The iPhone sends and receives textual/ASCII data via standard sockets. I now need to stream the microphone audio from the Mac to the iPhone. I've done some work with AudioServices before but wanted to check my thinking here before getting in too deep.
I'm thinking I can:
1. Create an Audio Queue in the standard Cocoa application on the Mac.
2. In my Audio Queue callback function, rather than writing the captured data to a file, write it to another socket I open for audio streaming.
3. On the iPhone, receive the raw sampled/encoded audio data from the TCP stream and feed it into an Audio Queue player which outputs to the headphones/speaker.
I know this is no small task and I've greatly simplified what I need to do, but could it be as easy as that?
Thanks for any help you can provide,
Stateful
This looks broadly sensible, but you'll almost certainly need to do a few more things:
Buffering. On the "recording" end, you probably don't want to block the audio queue if the buffer is full. On the "playback" end, I don't think you can just pass incoming data straight into the queue (IIRC you'll need to buffer it until you get a callback).
Concurrency. I'm pretty sure AQ callbacks happen on their own thread, so you'll need some sort of locking/barriers around your buffer accesses (see the sketch after this list).
Buffer pools, if memory allocation ends up being a big overhead.
Compression. AQ might be able to give you "IMA4" frames (IMA ADPCM 4:1, or so); I'm not sure if it does hardware MP3 decompression on the iPhone.
Packetization, if e.g. you need to interleave voice chat with text chat.
EDIT: Playback sync (or whatever you're supposed to call it). You need to be able to handle different effective audio clock rates, whether it's due to a change in latency or something else. Skype does it by changing playback speed (with pitch-correction).
EDIT: Packet loss. You might be able to get away with using TCP over a short link, but that depends a lot on the quality of your wireless network. UDP is a minor pain to get right (especially if you have to detect an MTU hole).
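To make the buffering and concurrency points concrete, here is a minimal sketch of the capture side, assuming an LPCM input queue created with AudioQueueNewInput(); the FIFO, its size cap, and SenderLoop() are all made-up names:

```cpp
// Capture side (Mac): the input callback copies each filled buffer into a
// mutex-guarded FIFO and immediately re-enqueues the buffer; a separate
// thread drains the FIFO to an already-connected TCP socket.
#include <AudioToolbox/AudioToolbox.h>
#include <sys/socket.h>
#include <deque>
#include <mutex>
#include <vector>

static std::mutex gFifoMutex;                   // AQ callbacks run on their own thread
static std::deque<std::vector<char>> gSendFifo;

static void InputCallback(void *userData, AudioQueueRef queue,
                          AudioQueueBufferRef buffer,
                          const AudioTimeStamp *startTime, UInt32 numPackets,
                          const AudioStreamPacketDescription *packetDescs)
{
    {
        std::lock_guard<std::mutex> lock(gFifoMutex);
        const char *data = static_cast<const char *>(buffer->mAudioData);
        gSendFifo.emplace_back(data, data + buffer->mAudioDataByteSize);
        while (gSendFifo.size() > 32)           // drop oldest instead of blocking
            gSendFifo.pop_front();
    }
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL); // hand it back; keep recording
}

static void SenderLoop(int sock)                // runs on its own thread
{
    for (;;) {
        std::vector<char> chunk;
        {
            std::lock_guard<std::mutex> lock(gFifoMutex);
            if (!gSendFifo.empty()) {
                chunk = std::move(gSendFifo.front());
                gSendFifo.pop_front();
            }
        }
        if (!chunk.empty())
            send(sock, chunk.data(), chunk.size(), 0);
        // A real implementation would wait on a condition variable here
        // rather than spinning.
    }
}
```

The playback side is the mirror image: the output callback pulls from a FIFO filled by the network-reading thread, and plays silence when the FIFO runs dry (again a sketch with made-up names):

```cpp
// Playback side (iPhone): a socket-reading thread (not shown) appends
// received bytes to gRecvBytes; the output callback drains it.
#include <AudioToolbox/AudioToolbox.h>
#include <cstring>
#include <deque>
#include <mutex>

static std::mutex gRecvMutex;
static std::deque<char> gRecvBytes;

static void OutputCallback(void *userData, AudioQueueRef queue,
                           AudioQueueBufferRef buffer)
{
    UInt32 filled = 0;
    {
        std::lock_guard<std::mutex> lock(gRecvMutex);
        char *out = static_cast<char *>(buffer->mAudioData);
        while (!gRecvBytes.empty() && filled < buffer->mAudioDataBytesCapacity) {
            out[filled++] = gRecvBytes.front();
            gRecvBytes.pop_front();
        }
    }
    if (filled == 0) {
        // Underrun: enqueue a buffer of silence so the queue keeps running.
        filled = buffer->mAudioDataBytesCapacity;
        memset(buffer->mAudioData, 0, filled);
    }
    buffer->mAudioDataByteSize = filled;
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}
```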
Depending on your data rates, it might be worthwhile going for the lower-level (BSD) socket API and potentially even using readv()/writev().
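For instance, writev() lets you send a small length header and the audio payload in one system call without first copying them into a contiguous buffer (a sketch; the 4-byte length prefix is a made-up framing choice):

```cpp
#include <arpa/inet.h>  // htonl()
#include <sys/uio.h>    // writev()
#include <cstdint>

// Send one length-prefixed packet with a single syscall and no extra copy.
ssize_t sendPacket(int sock, const void *payload, uint32_t payloadLen)
{
    uint32_t header = htonl(payloadLen);        // length prefix in network byte order
    struct iovec iov[2];
    iov[0].iov_base = &header;
    iov[0].iov_len  = sizeof(header);
    iov[1].iov_base = const_cast<void *>(payload);
    iov[1].iov_len  = payloadLen;
    return writev(sock, iov, 2);                // may still short-write; check the result
}
```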
If all you want is an "online radio" service and you don't care about the protocol used, it might be easier to use AVPlayer/MPMoviePlayer to play audio from a URL instead. This involves implementing a server that speaks Apple's HTTP Live Streaming protocol; I believe Apple has some sample code that does this.

How can you play music from the iPod app while still receiving remote control events in your app?

OK, I'm trying to let a user choose songs from their iPod library to listen to, but I still want to receive remote control notifications (headphones, lock screen OSD, etc.) in my app so I can do some extra things. So far I can get either iPod music playing or headphone events, but not both simultaneously.
Here's what I know so far...
1. If you use MPMusicPlayer, you have easy programmatic access to the entire music library. However, it, not your app, receives the remote notifications, regardless of whether you use applicationMusicPlayer or iPodMusicPlayer.
2. If you use AVAudioPlayer (Apple's recommended player for most sounds in your app), you can easily get remote notifications, but it doesn't natively have access to the iPod library.
3. AVAudioPlayer can be initialized with an asset URL, and tracks in the iPod library (of type MPMediaItem) do have a URL property that returns an NSURL instance which the documentation says is explicitly for use with AVAsset objects. But when you try initializing an AVAudioPlayer with that NSURL, it fails. (I used the 'now playing' track in iPod, which was an MP3, and it did return a valid NSURL object, but initialization still failed. Even worse, when it was an Audible.com file, the URL property flat-out returned nil.)
4. If you use an instance of AVAudioPlayer to get remote events (say, with a blank sound file) and then simultaneously use the MPMusicPlayer class to play iPod music, you have remote control access until you actually start iPod playback, at which time you lose it because your audio session gets deactivated and the system audio session becomes active.
5. If you try the same as #4 but instead set the audio session's category to a mixable variant, your session doesn't get deactivated, but you still lose remote control capability once the iPod starts playing.
In short, whenever MPMusicPlayer is playing, I can't seem to get remote events, and I don't know of any other way to play content from the iPod's library other than by using MPMusicPlayer.
ANY suggestions on how to get around this would be welcome. Creative or flat-out crazy. Don't care so long as it works.
Anyone? Anyone? Bueller? Bueller?
M
HA! Solved! I knew it was possible! (Thanks Async-games.com support!)
Here's how to play iPod music in your app, with background support, and with your app receiving the remote control notifications.
You have to use AVPlayer (but not AVAudioPlayer; no idea why that is!) initialized with the asset URL from the MPMediaItem you got from the library picker (or the current item in the MPMusicPlayerController, or wherever), then set the audio session's category to Playback (do NOT enable the mixing override or you'll lose the remote events!) and add the appropriate keys to your Info.plist file telling the OS your app wants to support background audio.
Done and done!
This lets you play items from your iPod library (except Audible.com files, for some reason!) in the background and still get remote events. Granted, since this is your own audio player, which is separate from and will interrupt the iPod app, you have to do more work, but those are the breaks!
Damn though... I just wish it worked with Audible.com files. (For those interested, the reason it doesn't is that the asset URL for an Audible file returns nil. Stinks! But what can ya do!)
This is probably not going to be of any use to the OP anymore, but as it may be for people finding this page through googling, I will post it anyway.
An alternative (but rather ugly) approach, if you are only interested in the music remote control events and still want to be able to play the Audible.com files...
Just keep using the MPMusicPlayer and track its notifications (now playing and state changed). To keep receiving these notifications in the background, you can do the "background thread magic" described in various places to keep your app from being suspended. You are not going to receive the remote controls directly (as the iPod player is receiving them), but by tracking the changes in "now playing" you can infer the ControlPreviousTrack and ControlNextTrack events, and by tracking the playbackState you can infer the TogglePlayPause command.
The downside is that your app is going to be running at all times for no good reason (although, to be fair, if iOS is programmed correctly, a background thread doing nothing should consume almost no battery).
Another alternative: use an MPMoviePlayer? I have checked that it works fine in the background, and it should receive remote control events as well. It can play MPMediaItems natively, so hopefully the Audible.com files as well...
There is no way around this. If the user's iPod app is playing an iPod selection, then all remote events are going to go to the iPod, not your app.
One thing I noticed about MPMediaItemPropertyAssetURL is that, although the object returned is an NSURL, its absoluteString is something like this:
ipod-library://item/item.mp3?id=580807975475997900
which is not what AVAudioPlayer wants. What AVAudioPlayer wants is an NSURL object created from a valid file path.
And I have no idea how to get a file path from an MPMediaItem. So I guess AVPlayer is the way to go if you want to play iPod tracks without using MPMusicPlayer.