I am looking for a way to record microphone input from a specific channel.
For example, I want to record the left and right channels of an M-Track audio interface separately, or individual SingStar wireless microphones.
Microphone.Start seems limited in this regard.
Further, I found this thread, which says:
Yes, you can assume the microphone at runtime will always be 1 channel.
My questions:
Any workaround to get this working?
Is there maybe an open-source lib or low-level microphone API in Unity?
Is it really not possible to record different microphone channels into different AudioClips in Unity?
I am looking for a solution at least on desktop platforms, i.e., Windows, macOS, Linux.
Bonus question: Is recording specific microphone channels working in Unreal Engine?
I'm currently developing an app for the HoloLens 2 that needs to stream audio from a desktop PC.
The idea is to send control information (position, orientation, etc.) to a Cycling '74 Max/MSP application running on a Windows 10 computer to process audio for 3D audio playback. I now need to somehow stream the resulting sound to the Unity app running on the HoloLens. Both devices run on the same network.
At the moment I've achieved something using MRTK WebRTC for Unity in combination with a virtual cable as input. My issue is that this seems to be optimized for microphone use, as it applies options like noise reduction and reduced bandwidth. I can't find a way to set the WebRTC options to stream what I need (music) at higher quality.
Does anyone know how to change that in MRTK WebRTC, or have a better solution for streaming the audio to the HoloLens?
The WebRTC project for Mixed Reality is deprecated, and it is designed for real-time communication. If your requirement is media consumption, you need a different workaround.
For dedicated media streaming, you can set up a DLNA server on your PC for media access.
You may also set up Samba or NFS on your PC if you need to access files in other formats.
I'd like to develop a game/skill on Google Assistant that requires the following, once the user has entered the game/session (“hey Google, start game123”):
playing an audio file that is a few minutes long
playing a second audio file while the first clip is still playing
always listening. While the files are playing, the game needs to listen for and respond to specific voice phrases without the “Hey Google” keyword.
Are these capabilities supported? Thanks in advance.
"Maybe." A lot of it depends what devices on the Actions on Google platform you're looking to support and how necessary some of the requirements are. Depending on your needs, you may be able to play some tricks.
Playing an audio file that is "a few minutes" long.
You can play audio using SSML that is up to 120 seconds long. But that will be played before the microphone is opened to accept a response.
For longer files, you can use a Media Response. This has the interesting feature that when the audio finishes, an event will be sent to your server, so you have some limited way to handle timed responses and looping. On the downside - users have to say "Hey Google" to interrupt it. (And there are currently some bugs when using it.)
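To make the Media Response concrete, here is a minimal sketch using the actions-on-google Node.js client library; the intent names and the URL are placeholders, and the second handler assumes you have wired an intent to the actions_intent_MEDIA_STATUS event:

const { dialogflow, MediaObject } = require('actions-on-google');
const app = dialogflow();

app.intent('play episode', (conv) => {
    conv.ask('Starting the first track.'); // a media response must ride along with a simple response
    conv.ask(new MediaObject({
        name: 'Episode one',
        url: 'https://example.com/episode1.mp3', // placeholder URL
    }));
});

// Fires when playback ends, which is the "event sent to your server" above.
app.intent('media status', (conv) => {
    const status = conv.arguments.get('MEDIA_STATUS');
    if (status && status.status === 'FINISHED') {
        conv.ask('That track finished. Say "next" to keep going.');
    }
});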
Since you're doing a game, you can take advantage of the Interactive Canvas. This will let you use things such as the HTML <audio> tag and the Web Audio API. The big downside is that this is only available on Smart Displays and Android devices - you can't use it on Smart Speakers.
Playing multiple audio tracks
Google has an extension to SSML that allows parallel audio tracks for multiple spoken and audio output. But you can't layer these on top of a Media Response.
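As a hedged illustration of that extension (reusing the app object from the sketch above; the background URL is a placeholder): a <par> block plays its <media> children simultaneously, so narration can sit on top of a music bed.

app.intent('layered intro', (conv) => {
    conv.ask(`<speak>
      <par>
        <media begin="0s"><speak>Welcome back to the game!</speak></media>
        <media begin="0s" soundLevel="-10dB">
          <audio src="https://example.com/background.ogg"/>
        </media>
      </par>
    </speak>`);
});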
If you're using the Web Audio API with the Interactive Canvas, I believe it supports multiple simultaneous inputs.
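If that is right, layering two tracks in a canvas web app could look something like this sketch (file names are placeholders):

const ctx = new AudioContext();

// Fetch and decode each track, then start them all at once;
// sources connected to the same destination are mixed together.
async function playTogether(urls) {
    const buffers = await Promise.all(urls.map(async (url) => {
        const data = await fetch(url).then((r) => r.arrayBuffer());
        return ctx.decodeAudioData(data);
    }));
    for (const buffer of buffers) {
        const source = ctx.createBufferSource();
        source.buffer = buffer;
        source.connect(ctx.destination);
        source.start();
    }
}

playTogether(['music.mp3', 'effects.mp3']);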
Can I leave the microphone open so they don't have to say "Hey Google" every time?
Probably not, though in some cases it might not be a good idea anyway.
For Smart Speakers, you can't do this. People are used to something conversational, so they're waiting for the silence to know when they should be saying something. If you are constantly providing audio, they don't necessarily know when it is their "turn".
With the Interactive Canvas devices, we have a display that we can work with that cues them. And we can keep the microphone open during this time... at least to a point. The downside is that we don't know when the microphone is open and closed, so we can't duck the audio during this time. (At least not yet.)
Can I do what I want?
You're the only judge of that. It sounds like the Interactive Canvas might work well for your needs - but won't work everywhere. In some cases, you might be able to determine the capabilities of the device the user is playing with and present slightly different games depending on the features you have. Google does this, for example, with their "Lucky Trivia" game.
I wanted to know if Unity3D supports multi-channel audio input. I am trying to use an audio interface to input and process stereo audio in Unity.
Thanks!
As far as I know, Unity supports multiple simultaneous input devices (see Microphone), but assumes only one channel per device. This is a bit of a limitation for multi-channel device users.
I worked around it for my project by routing each input channel from my sound card to a virtual device, so that each device that Unity interacts with contains only a single input channel. You can do this using JACK or VoiceMeeter, for example.
Is there a way to record the audio currently being mixed down (possibly from another tab/process) on the hardware? Is there a way to input into, or connect to, the browser's mixer?
Studio hardware usually has several input/output channels, mono and/or stereo; is there a way to get these connected onto the graph? Is there, or will there be, a device enumeration API?
The closest thing you might be able to do is get data from the microphone, and set the system's recording device to your system's output (in Windows: Manage Audio Devices > Recording > Stereo Mix). Then just use getUserMedia to get the audio.
// webkitGetUserMedia is deprecated; the promise-based API is the standard now.
var context = new AudioContext();
navigator.mediaDevices.getUserMedia({audio: true}).then(function(stream) {
    // Wire the capture device (e.g. Stereo Mix) into the audio graph.
    var microphone = context.createMediaStreamSource(stream);
});
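For the device enumeration part of the question: browsers have since added navigator.mediaDevices.enumerateDevices(), which lists audio inputs and outputs (device labels are only populated once the user has granted permission). A minimal sketch:

navigator.mediaDevices.enumerateDevices().then(function(devices) {
    devices.filter(function(d) { return d.kind === 'audioinput'; })
           .forEach(function(d) { console.log(d.label, d.deviceId); });
});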
We are in the research stage for our upcoming web project. We would like to make a website that streams (all) the local FM radio stations.
While researching the right tools to set up this website, several questions arose.
What software do we need to encode the live broadcasts of (all) the local FM radio stations? How can we connect to the FM radio stations?
Do we need a Virtual Private Server to run the software from question one 24/7? Can a VPS do that, i.e., run software 24/7?
If we manage to encode the live broadcasts of (all) the local FM radio stations, how do we send them to our website? Can we use a simple audio player such as QuickTime/Flash or an HTML5 audio player and embed it in our website?
I hope someone can help us on this matter. Your help is greatly appreciated. :)
Audio Capture
The first thing you need to do is set up an encoder source for your streams. I highly recommend putting the encoder at each radio station. The quality of FM radio isn't the greatest. You will get much better audio quality at the station. In addition, at least here in the US, many radio stations have all of their studios in the same place. It isn't uncommon to find 8 stations all coming from the same set of offices. Therefore, you may only have to install equipment in 3 or 4 buildings to cover all the stations in your market.
Most stations these days are using digital mixing. Buy a sound card that has a compatible digital input. AES/EBU and S/PDIF are common, and sound cards that support these are affordable.
If you must capture audio over the air, make sure you are using high quality receivers (digital where available), with a high quality outdoor antenna. There are a variety of receivers you can purchase, many of which mount directly in a rack.
Encoding
Now for the actual encoding, you need software. I've always had good luck with EdCast (if you can find the version prior to "EdCast Reborn"). SAM is a good choice for stations that have their own music library they need to manage, but I don't suggest it in your case. You can even use VLC for this part.
You will need to pick a good codec. If you want compatibility with HTML5, you will want to encode in MP3 and Ogg Vorbis. aacPlus is a good choice for saving bandwidth while still providing a decent audio quality. Most stations these days use aacPlus when possible, but not all browsers can play it, which is why you also need the other two. You can (and should) use multiple codecs per station.
Server Software
I highly recommend Icecast or SHOUTcast. They take your encoded audio and distribute it to listeners. They serve an HTTP-like stream that most players can handle. If you are interested, I also do Icecast/SHOUTcast compatible hosting, with the goal of being compatible with more devices, particularly mobile.
Playback
Many stations these days use a player that tries HTML5, and falls back to Flash if necessary. jPlayer is a common choice, but there are many others. It is also good to provide a link to a playlist file containing the URL of your stream, so that users can listen in their own audio player if they choose.
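To sketch why the multiple-codec advice matters on the playback side, a player can feature-detect what the browser supports before picking a stream (the paths and host below are placeholders):

const audio = new Audio();
// canPlayType returns '', 'maybe', or 'probably'; an empty string is falsy.
const path = audio.canPlayType('audio/aac') ? '/stream.aac'
           : audio.canPlayType('audio/ogg; codecs="vorbis"') ? '/stream.ogg'
           : '/stream.mp3';
audio.src = 'http://stream.example.com:8000' + path;
audio.play();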