How to reduce background noise while recording on iPhone? - iphone

I am planning to compare two audio files. I have recorded two voices and compared them using cross-correlation. Because of the background noise present while recording, the resulting correlation value is always near 0.5. If I use recorded waves from the internet, I get the correct value. So how can I reduce the background noise while recording? Please guide me. Thanks.
Is there any way to reduce noise from an already recorded .wav file?

There isn't really a good way to remove background noise while recording, other than simply turning down the recording mic. It seems that you're already exporting the data, so instead of attempting to eliminate the noise at the source, it would probably be easier to filter it out afterwards; here is a quick tutorial on how to eliminate background noise using Audacity.

Try using a noise-canceling microphone and an anechoic chamber (or really quiet studio).
You can also try filtering out all frequency bands that are not of interest (e.g. those outside the human vocal range needed for recognition or for your comparison).
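For example, if only speech matters for the comparison, you could run the samples through a simple band-pass before correlating. A minimal sketch in Swift using a standard RBJ-style biquad band-pass; the sample rate, centre frequency and Q shown in the usage comment are illustrative values, not anything from the question:

    import Foundation

    /// Very small band-pass filter (RBJ biquad), applied sample by sample.
    /// Passes energy around `center` Hz and attenuates everything else.
    func bandPass(_ input: [Float], sampleRate: Double, center: Double, q: Double) -> [Float] {
        let w0 = 2 * Double.pi * center / sampleRate
        let alpha = sin(w0) / (2 * q)
        // RBJ band-pass (constant 0 dB peak gain) coefficients, normalised by a0
        let a0 = 1 + alpha
        let b0 = Float(alpha / a0)
        let b2 = Float(-alpha / a0)
        let a1 = Float(-2 * cos(w0) / a0)
        let a2 = Float((1 - alpha) / a0)

        var x1: Float = 0, x2: Float = 0, y1: Float = 0, y2: Float = 0
        var output: [Float] = []
        output.reserveCapacity(input.count)
        for x in input {
            // b1 is zero for this filter, so the x[n-1] term drops out
            let y = b0 * x + b2 * x2 - a1 * y1 - a2 * y2
            x2 = x1; x1 = x
            y2 = y1; y1 = y
            output.append(y)
        }
        return output
    }

    // e.g. keep roughly the speech band before running the cross-correlation:
    // let filtered = bandPass(samples, sampleRate: 44_100, center: 1_000, q: 0.5)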

There is special software for removing background noise from audio. Audacity for PC might be the most popular (it is open source, by the way). You could also do it entirely on the iPhone with the Denoise app.
Basically, recorded audio contains a lot of individual sounds, and some of them are unwanted (e.g. buzzing lamps, street noise, mic issues, etc.). A very popular approach to suppressing unwanted sounds is to estimate the frequency spectrum of the noise and then suppress those frequencies throughout the whole audio. In both Audacity and Denoise you do that by selecting a "noise only" fragment: all sounds in that fragment are treated as noise and are suppressed across the whole file.
If you need to incorporate this feature into your app, you could have a look at the Audacity sources. Please let me know if you need more details.
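If you want to roll that approach yourself, the core idea is: build a per-frequency noise profile from the "noise only" fragment, then attenuate any bin that does not rise clearly above that profile. Below is a sketch of just that gating step, assuming you have already split the recording into frames and computed a magnitude spectrum per frame (e.g. with the Accelerate FFT); the STFT and the resynthesis back to audio are left out, and the threshold/attenuation values are placeholders:

    /// frames: magnitude spectra of the whole recording, one [Float] per frame
    /// noiseFrames: magnitude spectra of the "noise only" fragment
    /// Returns gated spectra: bins close to the noise profile are attenuated.
    func spectralGate(frames: [[Float]], noiseFrames: [[Float]],
                      thresholdFactor: Float = 2.0, attenuation: Float = 0.1) -> [[Float]] {
        guard let bins = noiseFrames.first?.count else { return frames }
        // Noise profile: average magnitude per frequency bin over the noise-only fragment
        var profile = [Float](repeating: 0, count: bins)
        for frame in noiseFrames {
            for i in 0..<bins { profile[i] += frame[i] }
        }
        for i in 0..<bins { profile[i] /= Float(noiseFrames.count) }

        // Gate: keep bins that rise well above the noise floor, attenuate the rest
        return frames.map { frame in
            (0..<bins).map { i in
                frame[i] > profile[i] * thresholdFactor ? frame[i] : frame[i] * attenuation
            }
        }
    }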

Related

Are there open source libraries to build audio visualizations for iOS?

I have an equalizer view with 10 bars in OpenGL ES which can light up and down. Now I'd like to drive this equalizer view from the background music that is playing in iOS.
Someone suggested that what I need is a Fast Fourier Transform to turn the audio data into frequency information. But since there are so many audio visualizations floating around, my hope is that there is an open source library or something else that I could start with.
Maybe there are open source iOS projects which do audio visualization?
Yes.
You can try this Objective-C library, which I wrote for exactly this purpose. It gives you an interface for playing files from URLs and then getting real-time FFT and waveform data, so that you can feed your OpenGL bars or whatever graphics you're using to visualise the sound. It also tries to deliver very accurate results.
If you want to do it in Swift, you can take a look at this example, which is cleaner and also shows how to actually draw the equalizer view.
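Whichever library you use, the last step is usually the same: collapse the FFT magnitude spectrum into your 10 bar heights. A sketch of that step, assuming you already get a magnitude spectrum per audio buffer (from a library like the one above, or your own FFT) and just need values in 0...1 to drive the OpenGL bars; the dB range is an arbitrary choice here:

    import Foundation

    /// Collapse an FFT magnitude spectrum into `barCount` bar heights in 0...1.
    /// Bands are spaced logarithmically so the bass bins don't dominate the display.
    func barHeights(magnitudes: [Float], barCount: Int = 10,
                    minDb: Float = -60, maxDb: Float = 0) -> [Float] {
        let binCount = magnitudes.count
        guard binCount >= barCount else { return Array(repeating: 0, count: barCount) }
        return (0..<barCount).map { bar in
            // logarithmically spaced band edges over the available bins
            let lo = Int(pow(Double(binCount), Double(bar) / Double(barCount)))
            let hi = max(lo + 1, Int(pow(Double(binCount), Double(bar + 1) / Double(barCount))))
            let band = magnitudes[lo..<min(hi, binCount)]
            let average = band.reduce(0, +) / Float(band.count)
            // convert to dB and normalise into 0...1 for a bar height
            let db = Float(20 * log10(Double(max(average, 1e-9))))
            return min(max((db - minDb) / (maxDb - minDb), 0), 1)
        }
    }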

Reduced quality OpenGL ES screenshots (iPhone)

I'm currently using this method from Apple to take screenshots of my OpenGL ES iPhone game. The screenshots look great. However, taking a screenshot causes a small stutter in the gameplay (which otherwise runs smoothly at 60 fps). How can I modify Apple's method to take lower-quality screenshots and thereby eliminate the stutter?
Edit #1: the end goal is to create a video of the gameplay using AVAssetWriter. Perhaps there's a more efficient way to generate the CVPixelBuffers referenced in this SO post.
What is the purpose of the recording?
If you want to replay a sequence on the device, you can look into saving the object positions etc. instead and redrawing the sequence in 3D. This also makes it possible to replay sequences from other viewpoints.
If you want to show the gameplay on e.g. YouTube, you can look into recording it with another device or camera, or record the game running in the simulator using screen-capture software such as ScreenFlow.
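To make the first option a bit more concrete, here is a minimal sketch of "record the state, not the pixels": each frame you append a small snapshot of whatever moves, and replaying is just feeding those snapshots back into your renderer. The types and fields below are made up for illustration:

    import Foundation
    import simd

    // Hypothetical per-frame snapshot: just the data needed to redraw the scene.
    struct FrameSnapshot {
        var time: TimeInterval
        var objectPositions: [simd_float3]   // one entry per game object
    }

    final class ReplayRecorder {
        private(set) var frames: [FrameSnapshot] = []

        func record(time: TimeInterval, positions: [simd_float3]) {
            frames.append(FrameSnapshot(time: time, objectPositions: positions))
        }

        /// Returns the snapshot to draw for a given playback time.
        func snapshot(at time: TimeInterval) -> FrameSnapshot? {
            frames.last(where: { $0.time <= time }) ?? frames.first
        }
    }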
The Apple method uses glReadPixels(), which just pulls all the data across from the display buffer and probably triggers sync barriers between the GPU and CPU. You can't make that part faster or lower resolution.
Are you doing this to create a one-off video? Or do you want the user to be able to trigger this behavior in the production code? If the former, you could do all sorts of trickery to speed it up: render everything at a smaller size, skip presenting at all and just capture frames by replaying a recording of the game's input data, or other such tricks; going even further, you could run the whole simulation at half speed to get all the frames.
I'm less helpful if you need an actual in-game function for this. Perhaps someone else will be.
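To make the "render at a smaller size" trick above concrete: rather than calling glReadPixels on the full-resolution screen framebuffer, render (or blit) the frame into a smaller offscreen framebuffer first and read that back; half the width and height is already a quarter of the data. A sketch of just the read-back step, assuming you have already created and rendered into such a reduced-size FBO (the fbo/width/height parameters describe your own capture target, not anything from Apple's sample):

    import OpenGLES

    /// Read back RGBA pixels from an already-rendered, reduced-size framebuffer.
    func captureFrame(fbo: GLuint, width: Int, height: Int) -> [UInt8] {
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
        var pixels = [UInt8](repeating: 0, count: width * height * 4)
        glPixelStorei(GLenum(GL_PACK_ALIGNMENT), 1)
        glReadPixels(0, 0, GLsizei(width), GLsizei(height),
                     GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), &pixels)
        return pixels
    }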
If all else fails.
Get one of these
http://store.apple.com/us/product/MC748ZM/A
And then convert that composite video to digital through some sort of external device.
I've done this when I converted vhs movies to dvd a long time ago.

How to detect music notes and chords programmatically (in iPhone SDK)?

How do I detect music notes and chords programmatically (in the iPhone SDK)?
You would need to do a Fourier transform (usually an FFT) of the incoming sound wave and find the dominant frequency, then look up that frequency in a table of notes with their corresponding frequencies.
The FFT has been part of iOS since iOS 4 and lives in the Accelerate framework.
Look at this other SO thread for more info and sample code.
Detecting a chord works on the same principle, but it's much trickier, since you need to find all the notes that make up the chord.
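As a rough sketch of the single-note case: once you have the magnitude spectrum (for example from the Accelerate FFT code in the linked thread), take the peak bin, convert it to a frequency, and snap that to the nearest note. The helper below assumes you pass in the magnitudes along with the sample rate and FFT size you used:

    import Foundation

    /// Map the dominant FFT bin to the nearest note name, e.g. "A4".
    /// `magnitudes` is the FFT magnitude spectrum (first half of the bins),
    /// `fftSize` the number of samples transformed, `sampleRate` in Hz.
    func dominantNote(magnitudes: [Float], fftSize: Int, sampleRate: Float) -> String? {
        guard let peakIndex = magnitudes.indices.max(by: { magnitudes[$0] < magnitudes[$1] }),
              peakIndex > 0 else { return nil }          // ignore the DC bin
        let frequency = Float(peakIndex) * sampleRate / Float(fftSize)

        // MIDI note number: 69 is A4 = 440 Hz, 12 semitones per octave
        let midi = Int((69 + 12 * log2(Double(frequency) / 440)).rounded())
        let names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
        let octave = midi / 12 - 1
        return names[midi % 12] + String(octave)
    }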

How to capture motion with the iPhone camera

In my application, when the user opens the camera, it should capture an image as soon as there is a difference compared to the previous image, and the camera should always stay in capturing mode.
This should be done automatically, without any user interaction. Please help me out, as I couldn't find a solution.
Thanks,
ravi
I don't think the iPhone camera can do what you want.
It sounds like you're doing a type of motion detection: comparing two snapshots taken at different times and seeing whether something has changed between the older and the newer image.
I don't think the iPhone can do that. The camera is not set up to take photos automatically, and I don't think the hardware can support the level of processing needed to compare two images in enough detail to detect motion.
Hmmmm, thinking about it, you might be able to detect motion by measuring the frame differentials in the video compression. All video codecs save space by only encoding the parts of the video that change from frame to frame, so a large change in the encoded data would indicate a large change in the scene.
I have no idea how to go about doing that but it might give you a starting point.
You could try using OpenCV for motion detection based on differences between captured frames, but I'm not sure whether the iPhone API allows reading multiple frames from the camera.
Look for motempl.c in the OpenCV distribution.
You can take a screenshot to capture the image automatically, using the UIGetScreenImage function.

Motion detection using iPhone

I saw at least 6 apps in the App Store that take photos when they detect motion (i.e. a kind of spy stuff). Does anybody know a general way to do such a thing using the iPhone SDK?
I guess these apps take photos every X seconds and compare the current image with the previous one to determine whether there is any difference (read: "motion"). Any better ideas?
Thank you!
You could probably also use the microphone to detect noise. That's actually how many security system motion detectors work - but they listen in on ultrasonic sound waves. The success of this greatly depends on the iPhone's mic sensitivity and what sort of API access you have to the signal. If the mic's not sensitive enough, listening for regular human-hearing-range noise might be good enough for your needs (although this isn't "true" motion-detection).
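If you go the microphone route, a crude version needs no signal processing at all: enable metering on AVAudioRecorder and poll the average power. A sketch, where the threshold and polling interval are arbitrary values you would tune (AVAudioSession setup and microphone permission handling are omitted):

    import AVFoundation

    /// Start recording with metering on and fire `onNoise` whenever the level
    /// rises above a (deliberately arbitrary) threshold in dBFS.
    func startNoiseMonitor(onNoise: @escaping () -> Void) throws -> AVAudioRecorder {
        let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("monitor.caf")
        let settings: [String: Any] = [AVFormatIDKey: kAudioFormatAppleIMA4,
                                       AVSampleRateKey: 44_100,
                                       AVNumberOfChannelsKey: 1]
        let recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder.isMeteringEnabled = true
        _ = recorder.record()

        _ = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
            recorder.updateMeters()
            // averagePower is in dBFS: 0 is full scale, around -160 is silence
            if recorder.averagePower(forChannel: 0) > -20 { onNoise() }
        }
        return recorder
    }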
As for images: look into using some sort of string-edit-distance-style algorithm, but for images. Something that takes a picture at a regular interval and compares it to the previous image taken. If the images are too different (the edit distance is too big), then the alarm sounds. This accounts for slow changes in daylight, and will probably work better than taking a single reference image at the beginning of the surveillance period and then comparing all other images to that.
If you combine these two methods (image and sound), it may get you what you need.
You could have the phone detect light changes using the ambient light sensor at the top front of the phone. I just don't know how you would access that part of the phone.
I think you've about got it figured out: the phone probably keeps images where the delta between image B and image A is over some predefined threshold.
You'd have to find an image library written in Objective-C in order to do the analysis.
I have built this kind of application. I wrote a library for Delphi 10 years ago, and the analysis is the same.
The idea is to divide the whole screen into a grid, e.g. 25x25, and compute an average color for each cell. After that, compare the R, G, B, H, S, V values of the average color from one picture to the next, and if the difference is more than a set threshold, you have motion.
In my application I use a fragment shader to show motion in real time. Any questions, feel free to ask ;)
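A sketch of that grid comparison on the CPU, for two frames already converted to raw RGBA bytes; the grid size, per-cell threshold and "how many cells must change" value are the kind of numbers you would tune, not fixed constants, and the frames are assumed to be at least `grid` pixels in each dimension:

    /// Compare two RGBA frames by averaging each cell of a grid and counting
    /// cells whose average color changed more than `threshold` (0...255 scale).
    func motionDetected(frameA: [UInt8], frameB: [UInt8],
                        width: Int, height: Int,
                        grid: Int = 25, threshold: Float = 12,
                        minChangedCells: Int = 3) -> Bool {
        func cellAverage(_ pixels: [UInt8], _ cx: Int, _ cy: Int) -> (r: Float, g: Float, b: Float) {
            let x0 = cx * width / grid, x1 = (cx + 1) * width / grid
            let y0 = cy * height / grid, y1 = (cy + 1) * height / grid
            var r: Float = 0, g: Float = 0, b: Float = 0, count: Float = 0
            for y in y0..<y1 {
                for x in x0..<x1 {
                    let i = (y * width + x) * 4
                    r += Float(pixels[i]); g += Float(pixels[i + 1]); b += Float(pixels[i + 2])
                    count += 1
                }
            }
            return (r / count, g / count, b / count)
        }

        var changed = 0
        for cy in 0..<grid {
            for cx in 0..<grid {
                let a = cellAverage(frameA, cx, cy)
                let b = cellAverage(frameB, cx, cy)
                let diff = (abs(a.r - b.r) + abs(a.g - b.g) + abs(a.b - b.b)) / 3
                if diff > threshold { changed += 1 }
            }
        }
        return changed >= minChangedCells
    }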