Multi-channel audio input to Unity - unity3d

I wanted to know if Unity3D supports multi-channel audio input. I am trying to use an audio interface to input and process stereo audio in Unity.
Thanks!

As far as I know, Unity supports multiple simultaneous input devices (see Microphone), but assumes only one channel per device. This is a bit of a limitation for multi-channel device users.
I worked around it for my project by routing each input channel from my sound card to a virtual device, so that each device that Unity interacts with contains only a single input channel. You can do this using Jack or VoiceMeeter, for example.
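For example, a minimal Unity sketch along those lines; the device names are placeholders for whatever your virtual devices are called in Microphone.devices:

using UnityEngine;

public class DualChannelCapture : MonoBehaviour
{
    // Placeholder device names: check Microphone.devices for the actual
    // names of your virtual (single-channel) inputs.
    const string LeftDevice = "Voicemeeter Out A";
    const string RightDevice = "Voicemeeter Out B";

    AudioClip leftClip;
    AudioClip rightClip;

    void Start()
    {
        foreach (var device in Microphone.devices)
            Debug.Log("Input device: " + device);

        // Each virtual device carries a single channel, so Unity's
        // one-channel-per-device assumption holds.
        leftClip = Microphone.Start(LeftDevice, true, 10, 48000);
        rightClip = Microphone.Start(RightDevice, true, 10, 48000);
    }
}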

Related

Record from specific microphone channel

I am looking for a way to record microphone input from a specific channel.
For example, I want to separately record the left/right channels of an M-Track audio interface, or SingStar wireless microphones.
Microphone.Start seems limited in this regard.
Further, I found this thread, which says
Yes, you can assume the microphone at runtime will always be 1 channel.
My questions:
Is there any workaround to get this working?
Is there maybe an open-source lib or a low-level microphone API in Unity?
Is it really not possible to record different microphone channels into different AudioClips in Unity?
I am looking for a solution at least on desktop platforms, i.e., Windows, macOS, Linux.
Bonus question: Is recording specific microphone channels working in Unreal Engine?
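One partial workaround, assuming your platform does surface a stereo clip (clip.channels == 2) despite the one-channel assumption quoted above: deinterleave the recorded samples into two mono AudioClips. A minimal sketch:

using UnityEngine;

public static class ChannelSplitter
{
    // Assumes an interleaved stereo clip; produces two mono clips.
    public static void Split(AudioClip stereo, out AudioClip left, out AudioClip right)
    {
        var interleaved = new float[stereo.samples * stereo.channels];
        stereo.GetData(interleaved, 0);

        var l = new float[stereo.samples];
        var r = new float[stereo.samples];
        for (int i = 0; i < stereo.samples; i++)
        {
            l[i] = interleaved[2 * i];     // even indices: left channel
            r[i] = interleaved[2 * i + 1]; // odd indices: right channel
        }

        left = AudioClip.Create("left", stereo.samples, 1, stereo.frequency, false);
        right = AudioClip.Create("right", stereo.samples, 1, stereo.frequency, false);
        left.SetData(l, 0);
        right.SetData(r, 0);
    }
}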

HoloLens 2 audio stream from desktop

I'm currently developing an app for the HoloLens 2 that needs to stream audio from a desktop PC.
The idea is to send control information (position, orientation, etc.) to a Cycling '74 Max/MSP application running on a Windows 10 computer to process audio for 3D audio playback. I now need to somehow stream the resulting sound to the Unity app running on the HoloLens. Both devices run on the same network.
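As an aside, the control side of that setup is straightforward. A minimal sketch of sending position data from Unity over UDP; the host, port, and message format are assumptions, and you would swap in a proper OSC library if your Max patch expects OSC:

using System.Net.Sockets;
using System.Text;
using UnityEngine;

public class ControlSender : MonoBehaviour
{
    // Assumed address/port of the PC running Max/MSP.
    const string Host = "192.168.0.10";
    const int Port = 9000;
    UdpClient udp;

    void Start()
    {
        udp = new UdpClient();
    }

    void Update()
    {
        // Plain-text payload, for illustration only.
        Vector3 p = transform.position;
        byte[] msg = Encoding.ASCII.GetBytes($"pos {p.x} {p.y} {p.z}");
        udp.Send(msg, msg.Length, Host, Port);
    }
}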
At the moment I've achieved something using MRTK WebRTC for Unity in combination with a virtual cable as input. My issue is that this seems to be optimized for microphone use, as it applies options like noise reduction and a smaller bandwidth. I can't find a way to set the WebRTC options to stream what I need (music) at better quality.
Does anyone know how to change that in MRTK WebRTC, or have a better solution for streaming the audio to the HoloLens?
The WebRTC project for Mixed Reality is deprecated, and it was designed for real-time communication. If your requirement is media consumption, you need a different workaround.
For dedicated media streaming, you can set up a DLNA server on your PC for media access.
You may also set up Samba or NFS on your PC if you need to access files in other formats.

Resonance Audio, Output to Speakers

I just started digging around Google's Resonance Audio for Unity, and it shows promise over headphones.
But I am interested in using it for a speaker setup. I have an ambisonic decoder interface and a speaker array that takes B-format signals. Is there a way to output a 4-channel / B-format signal directly from Unity so I can monitor Resonance's soundfield on loudspeakers?
At the moment, I am using SuperCollider / ATK with Unity via OSC as a custom sound engine to allow ambisonic playback on a speaker array. It works well, but I would like to take advantage of Google's new tools.
Outputting a 4-channel B-format signal directly from Unity using the Resonance Audio SDK is only supported when saving directly to a file (.ogg). Streaming the signal directly from Unity to an external ambisonic decoder interface is not currently supported.
If you are interested in the option to record a scene in ambisonic format and save it to a file, there are some scripting functions labeled "SoundfieldRecorder" that may help automate the monitoring process, e.g. saving to a custom file path. Ambisonic soundfield recording can also be done manually using the Resonance Audio Listener component in Unity, which has a Soundfield Recorder UI button to start and stop/save a recording.
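A minimal sketch of driving that from script; the method names below are assumptions based on the "SoundfieldRecorder" scripting functions mentioned above, so verify them against the SDK source:

using UnityEngine;

public class SoundfieldCapture : MonoBehaviour
{
    // Assumed API: the SDK exposes soundfield-recorder scripting functions
    // on ResonanceAudioListener; check the SDK source for the exact names
    // and signatures.
    ResonanceAudioListener listener;

    void OnEnable()
    {
        listener = FindObjectOfType<ResonanceAudioListener>();
        listener.StartSoundfieldRecorder();
    }

    void OnDisable()
    {
        // Saves a first-order (4-channel B-format) .ogg to the given path.
        listener.StopSoundfieldRecorder("/path/to/recording.ogg");
    }
}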

Apply live effects on audio being played - swift for macOS

For a few months now, I have been experimenting with the possibilities of audio manipulation on the Mac via Xcode and Swift.
I use AVFoundation's AVAudioEngine to apply audio effects and play an audio file. It is working fine, but I would like to go one step further and apply these effects to the audio being played on a specific audio device (by whatever application). I am using two audio devices (the first is 6-in/6-out, the second 2-out) and would like to be able to select which device the effects are applied to.
Is it possible to do that using AVAudioEngine? What about AVAudioMixerNode? Is it like a real mixer, with audio inputs, outputs, sends/returns, aux buses, and so on? What about AUGraph on the Mac? Is it possible to combine two different classes to do the job?
I'm looking for examples, but primarily for more information on the general way audio programming works under macOS.
Thank you.

Is there a way to get access to the master mixer or other devices/channels via the Web Audio API?

Is there a way to record the audio currently being mixed down (possibly from another tab/process) on the hardware? Is there a way to input/connect to the browser's mixer?
Studio hardware usually has several input/output channels, mono and/or stereo; is there a way to get these connected onto the graph? Is there, or will there be, a device enumeration API?
The closest thing you might be able to do is get data from the microphone, having set your system's recording device to capture the system output (in Windows: Manage Audio Devices > Recording > Stereo Mix). Then just use getUserMedia to get the audio.
// Capture the "Stereo Mix" device (exposed to the browser as a microphone)
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  var context = new AudioContext();
  var microphone = context.createMediaStreamSource(stream);
});