I'm working on a project that records stereo sound using the built-in microphones and then does some signal processing on it. However, there seems to be no specific solution to this problem.
There is a link showing that stereo recording using only the built-in microphones is quite feasible:
https://audioboo.fm/boos/1102187-recording-in-stereo-from-the-iphone-5#t=0m20s
However, I still do not know how to do it! Has anyone solved this problem?
There are some resources showing how to access the different built-in mics:
use rear microphone of iphone 5
It may also be quite easy to implement this project on an Android phone:
How to access the second mic android such as Galaxy 3,
How can I capture audio input from 2 mics of my android phone real time and simultaneously
I am looking for a way to record microphone input from a specific channel.
For example, I want to record separately left/right of an M-Track audio interface or SingStar wireless microphones.
Microphone.Start seems limited in this regard.
Further, I found this thread, which says:
Yes, you can assume the microphone at runtime will always be 1 channel.
My questions:
Is there any workaround to get this working?
Is there perhaps an open-source lib or a low-level microphone API for Unity?
Is it really not possible to record different microphone channels into different AudioClips in Unity?
I am looking for a solution at least on desktop platforms, i.e., Windows, macOS, and Linux.
Bonus question: Is recording specific microphone channels working in Unreal Engine?
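One workaround direction (not confirmed by the thread above) is to capture the raw interleaved multi-channel stream through a native audio library in a plugin, then split it per channel in your own code before handing each channel to an AudioClip. The capture side is platform-specific, but the splitting step is trivial; here is a minimal sketch of it, shown in Python for brevity:

```python
def deinterleave(samples, num_channels=2):
    """Split an interleaved buffer [L0, R0, L1, R1, ...]
    into one list of samples per channel."""
    return [samples[ch::num_channels] for ch in range(num_channels)]

# Interleaved stereo frames: (left, right) pairs flattened.
interleaved = [0.1, -0.1, 0.2, -0.2, 0.3, -0.3]
left, right = deinterleave(interleaved)
print(left)   # [0.1, 0.2, 0.3]
print(right)  # [-0.1, -0.2, -0.3]
```

In Unity itself you would do the same slicing in C# on the float array returned by your native plugin, writing each resulting channel buffer into its own AudioClip via SetData.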
I would like to use Chrome's speech recognition (webkitSpeechRecognition()) with an audio file as input, for testing purposes. I could use a virtual microphone, but that is really hacky and hard to automate; still, when I tested it, everything worked fine and the speech recognition converted my audio file to text. Now I want to use the following Chrome arguments:
--use-file-for-fake-audio-capture="C:/url/to/audio.wav"
--use-fake-device-for-media-stream
--use-fake-ui-for-media-stream
This worked fine on voice-recorder sites, for example, and I could hear the audio file play when I replayed the recording. But for some reason, when I try to use this with Chrome's webkitSpeechRecognition, it doesn't use the fake audio device but my actual microphone instead. Is there any way I can fix this or otherwise test my audio files on the website? I am using C#, and I couldn't really find any useful information on automatically adding, managing, and configuring virtual audio devices. What approaches could I take?
Thanks in advance.
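For reference, this is how the three flags are assembled when launching Chrome from code. The sketch below uses Python's standard library rather than C#; the Chrome executable path is a placeholder you would adjust for your system:

```python
import subprocess  # used only if you uncomment the launch line below

# Placeholder paths -- adjust for your system.
CHROME = r"C:\Program Files\Google\Chrome\Application\chrome.exe"
WAV = "C:/url/to/audio.wav"

args = [
    CHROME,
    "--use-file-for-fake-audio-capture=" + WAV,
    "--use-fake-device-for-media-stream",
    "--use-fake-ui-for-media-stream",
]
# subprocess.Popen(args)  # uncomment to actually launch Chrome
```

The equivalent in C# would pass the same argument strings through ProcessStartInfo.Arguments.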
Well, it turns out this is not possible, because Chrome and Google check whether you are using a fake mic etc.; they do this specifically to prevent this kind of behavior, so that people cannot get free speech-to-text. There is a paid API available from Google (the first 60 minutes per month are free).
For a few months now, I have been experimenting with the possibilities of audio manipulation on the Mac via Xcode and Swift.
I use AVFoundation's AVAudioEngine to apply audio effects and play an audio file. It works fine, but I would like to go one step further and apply these effects to the audio being played on a specific audio device (by whatever application). I am using two audio devices (the first is 6-in/6-out, the second 2-out) and would like to be able to select which device the effects are applied to.
Is it possible to do that using AVAudioEngine? What about AVAudioMixerNode? Is it like a real mixer, with audio inputs, outputs, sends & returns, aux, and so on? What about AUGraph on the Mac? Is it possible to combine two different classes to do the job?
I'm looking for examples, but primarily for more information on how audio programming generally works under macOS.
Thank you.
I want to record a video of my application running on my iPhone and use it for replay. How can I do that?
I searched for this and couldn't find any solution.
Your only options are:
1. Run the app in the simulator and use any screen-capture tool for the Mac to record a video
2. Fix the iPhone on a tripod/stand and record a video of your app with another camera
If the app does not have any feature that is available only on the device, I would avoid option 2, because the iPhone's glossy screen shows all kinds of reflections, and the user's hand in the video is really annoying.
Well, if your phone is a 4S, you can get a dongle from Apple that outputs the video as an HD stream. The dongle lets you plug in an HDMI cable, so you would need some way to capture that output.
I'm building an app that measures sound volume. I understand that the audio hardware in the iPhone is not as accurate as professional hardware, which is OK, but I need to know whether there are any differences between the different iPhone models. For example, is it possible that the volume measured on an iPhone 3G will differ from that measured on an iPhone 4? Unfortunately, I do not possess any models earlier than the 4, so I'm unable to test this myself.
The audio frameworks seem to be identical for identical iOS versions (except on the 2G). However, the physical microphones (and acoustical environments) are different. People have published various test results, such as:
http://blog.faberacoustical.com/2009/iphone/iphone-microphone-frequency-response-comparison/
and
http://blog.faberacoustical.com/2010/iphone/iphone-4-audio-and-frequency-response-limitations/
But it's possible that the mic response varies between manufacturing batches as well. YMMV.
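Since the measurement code itself behaves the same across models, the model-to-model differences reduce to the mic's response, which apps typically absorb as a per-device calibration offset. Here is a minimal sketch (in Python rather than Objective-C, and with an entirely illustrative calibration table) of the underlying RMS/dBFS computation:

```python
import math

def rms_dbfs(samples):
    """RMS level of float samples (full scale = 1.0) in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# Hypothetical per-model offsets you would measure against a reference
# SPL meter -- the values here are illustrative only, not real data.
CALIBRATION_DB = {"iPhone 3G": 0.0, "iPhone 4": 0.0}

def calibrated_db(samples, model):
    return rms_dbfs(samples) + CALIBRATION_DB.get(model, 0.0)

# One second of a full-scale 440 Hz sine at 44.1 kHz; its RMS is
# 1/sqrt(2), i.e. about -3.01 dBFS.
tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(rms_dbfs(tone), 2))  # -3.01
```

The dBFS figure is relative to digital full scale; converting it to an absolute SPL reading is exactly where the per-model mic differences (and the calibration table) come in.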
I'd just like to add that, no matter what I try, there seems to be no way to export audio through AVAssetExportSession on an iPhone 3G. It works with the 3GS, iPod touches, the 4, the iPad, and so on.