Is there a way to get access to the master mixer or other devices/channels via the Web Audio API? - web-audio-api

Is there a way to record the audio that is currently being mixed down (possibly from another tab/process) on the hardware? Is there a way to input/connect to the browser's mixer?
Studio hardware usually has several input/output channels, mono and/or stereo; is there a way to get these connected onto the graph? Is there, or will there be, some device enumeration API?

The closest thing you might be able to do is capture it as microphone input: set your system's recording device to the loopback of its output (in Windows: Manage Audio Devices > Recording > Stereo Mix), then just use getUserMedia to get the audio.
var context = new AudioContext();
navigator.webkitGetUserMedia({ audio: true }, function (stream) {
  // "microphone" now carries whatever the selected recording device captures
  var microphone = context.createMediaStreamSource(stream);
  microphone.connect(context.destination); // or into the rest of your graph
});
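As for the device enumeration part of the question: modern browsers expose navigator.mediaDevices.enumerateDevices(), which lists audio inputs and outputs and lets getUserMedia target a specific deviceId. A minimal sketch, assuming microphone permission has already been granted (this still only reaches devices the OS exposes as inputs, such as Stereo Mix, not the browser's internal mixer):

const ctx = new AudioContext();

navigator.mediaDevices.enumerateDevices().then(function (devices) {
  // Keep only capture devices; labels stay empty until permission is granted.
  const inputs = devices.filter(function (d) { return d.kind === 'audioinput'; });
  inputs.forEach(function (d) { console.log(d.label, d.deviceId); });

  // Open one input explicitly by deviceId and feed it into the graph.
  return navigator.mediaDevices.getUserMedia({
    audio: { deviceId: inputs[0].deviceId }
  });
}).then(function (stream) {
  const source = ctx.createMediaStreamSource(stream);
  source.connect(ctx.destination);
});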

Related

Record from specific microphone channel

I am looking for a way to record microphone input from a specific channel.
For example, I want to record the left and right channels of an M-Track audio interface, or SingStar wireless microphones, separately.
Microphone.Start seems limited in this regard.
Further, I found this thread, which says
Yes, you can assume the microphone at runtime will always be 1 channel.
My questions:
Any workaround to get this working?
Is there maybe an open-source lib or low-level microphone API in Unity?
Is it really not possible to record different microphone channels into different AudioClips in Unity?
I am looking for a solution at least on desktop platforms, i.e., Windows, macOS, Linux.
Bonus question: Is recording specific microphone channels working in Unreal Engine?

HoloLens 2 audio stream from desktop

I'm currently developing an app for the HoloLens 2 that needs to stream audio from a desktop PC.
The idea is to send control information (position, orientation, etc.) to a Cycling '74 Max/MSP application running on a Windows 10 computer, which processes the audio for 3D playback. I now need to somehow stream the resulting sound to the Unity app running on the HoloLens. Both devices are on the same network.
At the moment I've achieved something using MRTK WebRTC for Unity in combination with a virtual cable as input. My issue is that this seems to be optimized for microphone use, as it applies settings such as noise reduction and a reduced bandwidth. I can't find a way to set the WebRTC options to stream what I need (music) at better quality.
Does anyone know how to change that in MRTK WebRTC, or does anyone have a better solution for streaming audio to the HoloLens?
The WebRTC project for Mixed Reality is deprecated, and it is designed for real-time communication. If your requirement is media consumption, you need another workaround.
For dedicated media streaming, you can set up a DLNA server on your PC for media access.
You may also set up Samba or NFS on your PC if you need to access files in other formats.

Web Audio API microphone EQ

Does anyone know if it is possible to add EQ / effects filtering to the live microphone input within an HTML5-enabled web page?
To be clear, I'm not talking about adjusting the sound of a pre-recorded WAV/MP3 playing in the HTML5 player (there are plenty of those); I am talking about adjusting the microphone characteristics in real time while the stream is being captured from the chosen mic source.
Has anyone managed to do this?
Thank you.
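For what it's worth, the Web Audio API does support this: route the getUserMedia stream through filter nodes before it reaches the output (or a MediaStreamAudioDestinationNode, if you want to record the processed signal). A minimal sketch, with arbitrary illustrative filter settings:

const context = new AudioContext();

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  const mic = context.createMediaStreamSource(stream);

  // A single-band "EQ": a peaking filter boosting around 1 kHz by 6 dB.
  const eq = context.createBiquadFilter();
  eq.type = 'peaking';
  eq.frequency.value = 1000; // Hz (illustrative)
  eq.gain.value = 6;         // dB (illustrative)

  // mic -> EQ -> speakers; chain more BiquadFilterNodes for more bands.
  mic.connect(eq);
  eq.connect(context.destination);
});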

Chrome speech recognition WebKitSpeechRecognition() not accepting input from fake audio device (--use-file-for-fake-audio-capture) or audio file

I would like to use Chrome's speech recognition, WebKitSpeechRecognition(), with an audio file as input, for testing purposes. I could use a virtual microphone, but that is really hacky and hard to automate; when I tested it, everything worked fine and the speech recognition converted my audio file to text. Now I wanted to use the following Chrome arguments instead:
--use-file-for-fake-audio-capture="C:/url/to/audio.wav"
--use-fake-device-for-media-stream
--use-fake-ui-for-media-stream
This worked fine on voice recorder sites, for example, and I could hear the audio file play when I replayed the recording. But for some reason, when I try to use this with Chrome's WebKitSpeechRecognition, it doesn't use the fake audio device but instead my actual microphone. Is there any way I can fix this, or otherwise test my audio files on the website? I am using C#, and I couldn't really find any useful info on automatically adding, managing and configuring virtual audio devices. What approaches could I take?
Thanks in advance.
Well, it turns out this is not possible, because Chrome and Google check whether you are using a fake mic etc.; they do this specifically to prevent this kind of behavior, so people cannot get free speech-to-text. There is a paid API available from Google (the first 60 minutes per month are free).
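For context, the recognition setup referred to in the question is along these lines (a minimal sketch of the webkitSpeechRecognition usage; as described above, it listens on whatever microphone Chrome itself selects, so the fake-device flags have no effect on it):

const recognition = new webkitSpeechRecognition();
recognition.lang = 'en-US';         // illustrative; match the audio file's language
recognition.interimResults = false; // only final results

recognition.onresult = function (event) {
  console.log('Transcript:', event.results[0][0].transcript);
};
recognition.onerror = function (event) {
  console.log('Recognition error:', event.error);
};

recognition.start();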

Multi-channel audio input to unity

I wanted to know if Unity3D supports multi-channel audio input. I am trying to use an audio interface to input and process stereo audio in Unity.
Thanks!
As far as I know, Unity supports multiple simultaneous input devices (see Microphone), but assumes only one channel per device. This is a bit of a limitation for multi-channel device users.
I worked around it for my project by routing each input channel from my sound card to a virtual device, so that each device Unity interacts with contains only a single input channel. You can do this using JACK or VoiceMeeter, for example.