Web Audio API microphone EQ - web-audio-api

Does anyone know if it is possible to add EQ / effects filtering to the live microphone input within an HTML5-enabled web page?
To be clear, I'm not talking about adjusting the sound of a pre-recorded wav/mp3 playing in the HTML5 player; there are plenty of those. I am talking about adjusting the microphone characteristics in real time while the stream is being captured from the chosen mic source.
Has anyone managed to do this?
Thank you.
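For context, this kind of live-mic processing is what the Web Audio API's routing graph is designed for. Below is a minimal, hedged sketch assuming a browser that supports getUserMedia, AudioContext, and BiquadFilterNode; the function name and the frequency/gain values are illustrative, not taken from the question.
```typescript
// Minimal sketch: route live microphone input through a peaking EQ filter.
// Assumes a modern browser with navigator.mediaDevices.getUserMedia and AudioContext.
async function startMicEq(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();

  // Wrap the live MediaStream as a Web Audio source node.
  const micSource = ctx.createMediaStreamSource(stream);

  // One EQ band as an example; chain several BiquadFilterNodes for a full EQ.
  const eqBand = ctx.createBiquadFilter();
  eqBand.type = "peaking";
  eqBand.frequency.value = 1000; // illustrative centre frequency in Hz
  eqBand.gain.value = 6;         // illustrative boost in dB
  eqBand.Q.value = 1;

  // mic -> EQ -> speakers; the filter runs on the live stream in real time.
  micSource.connect(eqBand);
  eqBand.connect(ctx.destination);
}
```
If the processed audio needs to be captured rather than just monitored, the filter's output can instead be routed into a MediaStreamAudioDestinationNode (via ctx.createMediaStreamDestination()) and recorded with MediaRecorder.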

Related

Photon Voice - Combine Recorder and Audio Source (Karaoke App)

I'm developing a karaoke application, but I'm having difficulty getting the audio from the mic and the audio from the music to sync on clients.
My solution right now is using Photon Voice to send the mic audio (as normal), and the music is being transferred the same way but uses a Factory and IAudioPusher. Unfortunately, the audio isn't in sync; I guess I was optimistic in thinking it would be!
I'm looking for advice on how I can achieve this, maybe if there's a way to combine AudioSources in real time? If not, any other suggestions would be much appreciated.
Thanks.

Google Assistant for voice-input game

I'd like to develop a game/skill on Google Assistant that requires the following once the user has entered the game/session (“Hey Google, start game123”):
playing an audio file that is a few minutes long
playing a second audio file while the first clip is still playing
always listening. While the files are playing, the game needs to listen and respond to specific voice phrases without the “Hey Google” keyword.
Are these capabilities supported? Thanks in advance.
"Maybe." A lot of it depends what devices on the Actions on Google platform you're looking to support and how necessary some of the requirements are. Depending on your needs, you may be able to play some tricks.
Playing an audio file that is "a few minutes" long.
You can play audio using SSML that is up to 120 seconds long. But that will be played before the microphone is opened to accept a response.
For longer files, you can use a Media Response. This has the interesting feature that when the audio finishes, an event will be sent to your server, so you have some limited way to handle timed responses and looping. On the downside - users have to say "Hey Google" to interrupt it. (And there are currently some bugs when using it.)
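For concreteness, a hedged sketch of what a Media Response might look like with the actions-on-google Node.js client library; the intent names and audio URL are placeholders, and the finish-handling intent must be mapped to the actions_intent_MEDIA_STATUS event in Dialogflow.
```typescript
// Sketch: returning a Media Response from a Dialogflow fulfillment webhook
// using the actions-on-google client library. Names and URLs are placeholders.
import { dialogflow, MediaObject, SimpleResponse, Suggestions } from 'actions-on-google';

const app = dialogflow();

app.intent('play_track', (conv) => {
  // A Media Response must be accompanied by a simple response
  // (and suggestion chips on screen surfaces).
  conv.ask(new SimpleResponse('Starting the track.'));
  conv.ask(new MediaObject({
    name: 'Game background track',
    url: 'https://example.com/audio/track.mp3', // placeholder audio URL
  }));
  conv.ask(new Suggestions('Stop'));
});

// Intent mapped to the actions_intent_MEDIA_STATUS event in Dialogflow;
// it fires when the media finishes, which allows looping or timed follow-ups.
app.intent('media_finished', (conv) => {
  conv.ask(new SimpleResponse('The track has finished.'));
});

export { app };
```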
Since you're doing a game, you can take advantage of the Interactive Canvas. This will let you use things such as the HTML <audio> tag and the Web Audio API. The big downside is that this is only available on Smart Displays and Android devices - you can't use it on Smart Speakers.
Playing multiple audio tracks
Google has an extension to SSML that allows parallel audio tracks for multiple spoken and audio output. But you can't layer these on top of a Media Response.
If you're using the Web Audio API with the Interactive Canvas, I believe it supports multiple simultaneous inputs.
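As a hedged illustration of that last point, here is a minimal sketch of mixing two sources through one AudioContext with the Web Audio API; the URLs and helper name are placeholders, not part of the original answer.
```typescript
// Sketch: play two decoded audio buffers simultaneously through one AudioContext.
// The URLs passed in are placeholders; both sources simply mix at the destination.
async function playTwoTracks(ctx: AudioContext, urlA: string, urlB: string): Promise<void> {
  const load = async (url: string): Promise<AudioBuffer> => {
    const response = await fetch(url);
    return ctx.decodeAudioData(await response.arrayBuffer());
  };

  const [bufferA, bufferB] = await Promise.all([load(urlA), load(urlB)]);

  for (const buffer of [bufferA, bufferB]) {
    const source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination); // overlapping sources are mixed automatically
    source.start();                  // start both right away so they play in parallel
  }
}
```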
Can I leave the microphone open so they don't have to say "Hey Google" every time?
Probably not, but this may not be a good idea in some cases, anyway.
For Smart Speakers, you can't do this. People are used to something conversational, so they're waiting for the silence to know when they should be saying something. If you are constantly providing audio, they don't necessarily know when it is their "turn".
With the Interactive Canvas devices, we have a display that we can work with that cues them. And we can keep the microphone open during this time... at least to a point. The downside is that we don't know when the microphone is open and closed, so we can't duck the audio during this time. (At least not yet.)
Can I do what I want?
You're the only judge of that. It sounds like the Interactive Canvas might work well for your needs - but won't work everywhere. In some cases, you might be able to determine the capabilities of the device the user is playing with and present slightly different games depending on the features you have. Google does this, for example, with their "Lucky Trivia" game.

MPEG Dash streaming on multiple screen using EXOplayer

I have a requirement to play an MPEG-DASH stream on multiple screens, so I want to play the same stream on different devices using ExoPlayer.
If anyone has any idea how to play the same MPEG-DASH stream on multiple screens, please get back to me.
Thank You,
If you are new to MPEG-DASH, I suggest reading some introductory articles, like http://www.streamingmedia.com/Articles/Editorial/What-Is-.../What-is-MPEG-DASH-79041.aspx and https://www.bitcodin.com/blog/2015/04/mpeg-dash/.
The next step could be to define which platforms should be supported and whether you are going for a native app (like ExoPlayer), browser-based playout (like dash.js or Bitmovin), or the native MPEG-DASH support of some devices, such as some Smart TVs.
Once you share that information and the goal of your efforts, people on this platform can help you with more details.
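As a hedged sketch of the browser-based playout option mentioned above, a minimal dash.js setup could look roughly like this; the manifest URL and element id are placeholders. Loading the same manifest on each device's page is one simple way to show the same stream on multiple screens.
```typescript
// Sketch: browser-based MPEG-DASH playback with dash.js.
// Assumes dash.js has been loaded via a <script> tag and a <video id="player"> element exists.
declare const dashjs: any; // global exposed by the dash.js script

function startDashPlayback(): void {
  const videoElement = document.getElementById("player") as HTMLVideoElement;
  const manifestUrl = "https://example.com/stream/manifest.mpd"; // placeholder URL

  const player = dashjs.MediaPlayer().create();
  player.initialize(videoElement, manifestUrl, true); // third argument: autoplay
}
```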

Recording from radio streaming xcode 4

I'm building an iPhone app which plays radio over the internet.
I want the user to be able to record audio from whichever radio channel they choose within my application.
Which framework should I use, and how would I implement this?
Regards
Unfortunately, there is no way to capture directly from the "audio bus". You can capture the audio via the internal microphone or a headset microphone, but that's it. If you are rendering the audio yourself, you could obviously also write that audio out to a file at the same time. That's pretty much your only option.

How can I record currently playing audio on the iPhone?

I'd like to record what the iPhone is currently outputting. So I'm thinking about recording audio from Apps like Music (iPod), Skype, any Radio Streaming App, Phone, Instacast... I don't want to record my own audio or the mic input.
Is there an official way to do this? How do I do it? It seems like AVAudioRecorder does not allow this, can somebody confirm?
Officially you can't. The audio stream belongs to the app playing it, and to iOS.
The sandbox paradigm means that a resource owned by your app can't be used by another app. Resource here means an audio/video stream or file. Exceptions are when a mediator like the document interaction controller is used.
If you want to do this, you'd have to start by digging into AVFoundation's private methods and find out if there's a way there. Needless to say, it wouldn't be allowed on the App Store and will probably only be possible on a jailbroken device.
Good Luck.
TLDR;
This is only feasible from time to time, as it's a time-expensive process.
You can record the screen while listening to your songs on Spotify, Music, or whatever music application.
This will generate a video in your Photos application. That video can be converted to MP3 on your computer.
Actually, this is not true. The screen recording will not have the audio from Apple Music at all, as it blocks it. Discord also uses this mechanism, so you cannot record Discord audio this way either.