Just started digging into Google's Resonance Audio for Unity, and it shows promise over headphones.
But I am interested in using it for a speaker setup. I have an ambisonic decoder interface and a speaker array that takes B-format signals. Is there a way to output a 4-channel / B-format signal directly from Unity so I can monitor Resonance's soundfield over loudspeakers?
At the moment I'm using SuperCollider / ATK with Unity via OSC as a custom sound engine to allow ambisonic playback on a speaker array. It works well, but I would like to take advantage of Google's new tools.
Outputting a 4-channel B-format signal directly from Unity using the Resonance Audio SDK is only supported when saving it to a file (.ogg). Streaming the signal directly from Unity to an external ambisonic decoder interface is not currently supported.
If you are interested in recording a scene in ambisonic format and saving it to a file, there are some scripting functions labeled "SoundfieldRecorder" that may help automate the monitoring process, e.g. saving to a custom file path. Ambisonic soundfield recording can also be done manually using the Resonance Audio Listener component in Unity, which has a Soundfield Recorder UI button to start and stop/save a recording.
I'm currently developing an app for the HoloLens 2 that needs to stream audio from a desktop PC.
The idea is to send control information (position, orientation, etc.) to a Cycling '74 Max/MSP application running on a Windows 10 computer, which processes the audio for 3D playback. I then need to somehow stream the resulting sound to the Unity app running on the HoloLens. Both devices are on the same network.
At the moment I've achieved something using MRTK WebRTC for Unity in combination with a virtual cable as input. My issue is that this seems to be optimized for microphone use, as it applies options such as noise reduction and a reduced bandwidth. I can't find a way to configure WebRTC to stream what I need (music) at higher quality.
Does anyone know how to change that in MRTK WebRTC, or have a better solution for streaming audio to the HoloLens?
The WebRTC project for Mixed Reality is deprecated, and it is designed for real-time communication. If your requirement is media consumption, you need a different workaround.
For dedicated media streaming, you can set up a DLNA server on your PC for media access.
You may also set up Samba or NFS on your PC if you need to access files in other formats.
I wanted to know if Unity3D supports multi-channel audio input. I am trying to use an audio interface to input and process stereo audio in Unity.
Thanks!
As far as I know, Unity supports multiple simultaneous input devices (see Microphone), but assumes only one channel per device. This is a bit of a limitation for multi-channel device users.
I worked around it in my project by routing each input channel from my sound card to a virtual device, so that each device Unity interacts with contains only a single input channel. You can do this using JACK or VoiceMeeter, for example.
For a few months now I have been experimenting with the possibilities of audio manipulation on the Mac via Xcode and Swift.
I use AVFoundation's AVAudioEngine to apply audio effects and play an audio file. It is working fine, but I would like to go one step further and apply these effects to the audio being played on a specific audio device (by whatever application). I'm using two audio devices (the first is 6-in/6-out, the second 2-out) and would like to be able to select which device the effects are applied to.
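For reference, here is a minimal sketch of the kind of setup I already have working (a player node feeding an effect into the engine's main mixer). The reverb and the file name are just placeholders for whatever effect and file you use:

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let reverb = AVAudioUnitReverb()          // placeholder effect
reverb.loadFactoryPreset(.largeHall)
reverb.wetDryMix = 40

engine.attach(player)
engine.attach(reverb)

do {
    // "loop.caf" is a placeholder path.
    let file = try AVAudioFile(forReading: URL(fileURLWithPath: "loop.caf"))

    // Graph: player -> reverb -> main mixer -> default output device.
    engine.connect(player, to: reverb, format: file.processingFormat)
    engine.connect(reverb, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil, completionHandler: nil)
    player.play()
} catch {
    print("Audio setup failed: \(error)")
}
```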
Is it possible to do that using AVAudioEngine? What about AVAudioMixerNode? Is it like a real mixer, with audio inputs, outputs, sends/returns, aux buses, and so on? What about AUGraph on the Mac? Is it possible to combine two different classes to do the job?
I'm looking for examples, but primarily for more information on how audio programming generally works under macOS.
Thank You.
I'm currently working on a project where it is necessary to record sound being played by the iPhone. By this, I mean recording sound being played in the background like a sound clip or whatever, NOT using the built-in microphone.
Can this be done? I am currently experimenting with AVAudioRecorder, but it only captures sound from the built-in microphone.
Any help would be appreciated!
This is possible only if your app plays its audio exclusively through the Audio Unit RemoteIO API or exclusively through the Audio Queue API, with uncompressed raw audio and no background audio mixed in. In that case you have full access to the audio samples and can queue them up to be saved to a file.
It is not possible to record the device's own sound output using any of the other public audio APIs.
Just to elaborate on hotpaw2's answer: if you are responsible for generating the sound, then you can retrieve it; if you are not, you cannot. You only have control over sounds in your own process. Yes, you can choose to stifle sounds coming from other processes, but you can't actually get the data for those sounds or process them in any way.
I want to record the sound of my iPhone app, so that when someone plays something on an iPhone instrument, they can listen back to it afterwards.
Is this possible without the microphone?
Do you mean an app you build yourself? If so, you could just save the rendered waveform (perhaps encoded/compressed to save space) for later playback. (See Extended Audio File Services: it can write to a file the same AudioBufferList that you would render to the RemoteIO audio unit when playing audio in your instrument app.)
[Edit: removed comments on recording third-party app audio output ...]
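Extended Audio File Services is a C API; here is a rough Swift sketch of the same write-what-you-render idea, using AVAudioFile as a higher-level stand-in. The format, path, and buffer source are placeholders, not anything specific to your app:

```swift
import AVFoundation

/// Writes the same PCM buffers your instrument app renders for playback out to a file.
/// AVAudioFile stands in here for the Extended Audio File Services C API mentioned above.
final class PerformanceWriter {
    private let file: AVAudioFile

    init(url: URL, format: AVAudioFormat) throws {
        file = try AVAudioFile(forWriting: url, settings: format.settings)
    }

    /// Call this with every buffer you also hand to the playback path.
    func append(_ buffer: AVAudioPCMBuffer) throws {
        try file.write(from: buffer)
    }
}

// Usage (format and path are placeholders):
// let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!
// let writer = try PerformanceWriter(url: URL(fileURLWithPath: "take1.caf"), format: format)
// try writer.append(renderedBuffer)   // renderedBuffer: whatever your app just rendered
```

If you render through AVAudioEngine rather than a raw RemoteIO unit, you could also grab the same buffers by installing a tap on the engine's main mixer node (installTap(onBus:bufferSize:format:block:)) and passing each tapped buffer to a writer like this.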
With AVFoundation as you are currently using it, you're always working at the level of sound files; your code never sees the actual audio signal. Thus, you can't 'grab' the audio signal that your app generates while it is being used. AVAudioPlayer also does not provide any means of getting at the final signal, and if you're using multiple instances of AVAudioPlayer to play several sounds at the same time, you wouldn't be able to get at the mixed signal either.
Alas, you probably need to use Core Audio, which is a much lower-level interface.
I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions that leads to the audio being played, together with their timing? Write this sequence of events to a file and read it back in to reproduce the 'performance' - it's a bit like your own MIDI sequencer :)
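Here is a very rough sketch of what I mean; NoteEvent is just a made-up placeholder for whatever actually triggers a sound in your app:

```swift
import Foundation

/// Placeholder for whatever triggers a sound in your app (a key press, a pad hit, ...).
struct NoteEvent {
    let name: String
}

/// Records (timestamp, event) pairs instead of audio, and replays them later.
final class PerformanceSequencer {
    private var events: [(time: TimeInterval, event: NoteEvent)] = []
    private var startTime: Date?

    func beginRecording() {
        startTime = Date()
        events.removeAll()
    }

    /// Call this from the same place that triggers the sound.
    func record(_ event: NoteEvent) {
        guard let start = startTime else { return }
        events.append((Date().timeIntervalSince(start), event))
    }

    /// Replays the recorded events with their original timing.
    /// `play` is whatever function your app already uses to make the sound.
    func replay(using play: @escaping (NoteEvent) -> Void) {
        for (time, event) in events {
            DispatchQueue.main.asyncAfter(deadline: .now() + time) {
                play(event)
            }
        }
    }
}
```

You would then persist the recorded events to a file (e.g. via Codable) to save a 'performance', and load them back in to replay it.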