Are there open source libraries to build audio visualizations for iOS?

I have an equalizer view with 10 bars in OpenGL ES, each of which can light up and dim. Now I'd like to drive this equalizer view from the background music that is playing in iOS.
Someone suggested that what I need is a Fast Fourier Transform to turn the audio data into frequency data. But since there are so many audio visualizations floating around, my hope is that there is an open source library, or anything at all, that I could start with.
Maybe there are open source iOS projects which do audio visualization?

Yes.
You can try this Objective-C library, which I wrote for exactly this purpose. It gives you an interface for playing files from URLs and then getting real-time FFT and waveform data, so you can feed your OpenGL bars (or whatever graphics you're using to visualise the sound). It also tries to deliver very accurate results.
If you want to do it in Swift, you can take a look at this example, which is cleaner and also shows how to actually draw the equalizer view.
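If you would rather roll your own analysis instead of pulling in a library, the core of it is fairly small. Below is a minimal Swift sketch (mine, not taken from either project above) that uses Apple's Accelerate framework to turn one buffer of mono samples into ten bar levels; the function name, buffer length, band grouping and log compression are all assumptions you would tune for your own view.

import Accelerate
import Foundation

// Turn one buffer of mono samples into `barCount` bar levels using vDSP's real FFT.
// Assumes the buffer length is a power of two (e.g. 1024 samples).
func barLevels(from samples: [Float], barCount: Int = 10) -> [Float] {
    let n = samples.count
    let log2n = vDSP_Length(log2(Float(n)))
    guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else {
        return Array(repeating: 0, count: barCount)
    }
    defer { vDSP_destroy_fftsetup(setup) }

    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    var magnitudes = [Float](repeating: 0, count: n / 2)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                        imagp: imagPtr.baseAddress!)
            // Pack the real samples into split-complex form, run the forward FFT,
            // then take the squared magnitude of each frequency bin.
            samples.withUnsafeBufferPointer { buf in
                buf.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
        }
    }

    // Group the bins into `barCount` bands and compress the range for display.
    let binsPerBar = magnitudes.count / barCount
    return (0..<barCount).map { bar in
        let band = magnitudes[bar * binsPerBar ..< (bar + 1) * binsPerBar]
        let mean = band.reduce(0, +) / Float(binsPerBar)
        return min(1, log10(1 + mean))
    }
}

Each returned value can drive the height or brightness of one bar; feed the function a fresh buffer every frame (or every few frames) from whatever is supplying your audio samples.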

Related

Which formats can I use for animated 3D objects

.. in order to use them in an augmented reality application on iPhone?
I have an app working like that: the camera detects a marker and then places an object related to that marker. However, the object is not animated; it just stands there. Of course I can move the object programmatically, but I don't want to do that. What I want is for the object to have its own animation. I searched but couldn't find the exact file format. There are .obj files (not animated themselves, is that true?), .mtl files, .anm files, etc. If the format is one of these, can you give me an example model?
Thanks
MD2 is definitely your best choice. I have integrated jPCT-AE and Vuforia and it works like a charm. AFAIK MD2 is best for animated models, mainly because it stores the key-frame animation in the file itself and you can have any kind of animation with it at almost no cost.
Here is a test video:
http://youtu.be/chsHh0pEhzw
If you have more questions, do not hesitate ;)
You should specify which AR SDK/platform you are using to create the iPhone application you are talking about. That said, for many of the common AR SDKs available, the MD2 format is often used to display animated models (either with a built-in render engine or with example code that shows how to use the MD2 format with that SDK):
http://en.wikipedia.org/wiki/MD2_(file_format)
http://www.junaio.com/develop/docs/documenation/general/3dmodels/
OBJ (Wavefront) files do not support animation.

Reduced quality OpenGL ES screenshots (iPhone)

I'm currently using this method from Apple to take screenshots of my OpenGL ES iPhone game. The screenshots look great. However taking a screenshot causes a small stutter in the game play (which otherwise runs smoothly at 60 fps). How can I modify the method from Apple to take lower quality screenshots (hence eliminating the stutter caused by taking the screenshot)?
Edit #1: the end goal is to create a video of the game play using AVAssetWriter. Perhaps there's a more efficient way to generate the CVPixelBuffers referenced in this SO post.
What is the purpose of the recording?
If you want to replay a sequence on the device, you can look into saving the object positions etc. instead and redrawing the sequence in 3D. This also makes it possible to replay sequences from other viewpoints.
If you want to show the gameplay on, say, YouTube, you can look into recording it with another device/camera, or recording gameplay running in the simulator with screen capture software such as ScreenFlow.
The Apple method uses glReadPixels(), which just pulls all the data across from the display buffer, and probably triggers sync barriers, etc., between GPU and CPU. You can't make that part faster or lower resolution.
Are you doing this to create a one-off video, or do you want the user to be able to trigger this behavior in the production code? If the former, you could do all sorts of trickery to speed it up: render everything at a smaller size, don't present at all and just capture frames driven by a recording of the input data running into the game, or other such tricks; or, going even further, run the whole simulation at half speed to get all the frames.
I'm less helpful if you need an actual in-game function for this. Perhaps someone else will be.
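For the AVAssetWriter route mentioned in the question's edit, the usual pattern is to pull pixel buffers from the adaptor's pool, read the framebuffer into them, and append them with a presentation time. Here is a rough Swift sketch under some assumptions: the adaptor and writer input were created elsewhere, an OpenGL ES context is current, and the pool's pixel format matches what glReadPixels writes. The glReadPixels call is exactly the stall discussed above, so rendering smaller remains the main lever.

import AVFoundation
import OpenGLES

// Rough sketch: read the current OpenGL ES framebuffer into a CVPixelBuffer taken
// from the adaptor's pool and append it to the writer as one video frame.
func appendFrame(width: Int, height: Int,
                 adaptor: AVAssetWriterInputPixelBufferAdaptor,
                 time: CMTime) {
    guard let pool = adaptor.pixelBufferPool else { return }

    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
    guard let buffer = pixelBuffer else { return }

    CVPixelBufferLockBaseAddress(buffer, [])
    if let base = CVPixelBufferGetBaseAddress(buffer) {
        // This is the expensive call: it forces the GPU to finish rendering
        // before the pixels can be copied back to the CPU.
        glReadPixels(0, 0, GLsizei(width), GLsizei(height),
                     GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), base)
    }
    CVPixelBufferUnlockBaseAddress(buffer, [])

    if adaptor.assetWriterInput.isReadyForMoreMediaData {
        adaptor.append(buffer, withPresentationTime: time)
    }
}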
If all else fails, get one of these:
http://store.apple.com/us/product/MC748ZM/A
And then convert that composite video to digital through some sort of external device.
I did this when I converted VHS movies to DVD a long time ago.

How can I implement a meter just like an analog sound meter?

I need to implement a meter, just like a sound meter, for my iPhone app, but I am a little bit confused about its implementation.
Please help: how can I implement it?
The source code for the Android sound recorder's VU Meter is available online and should get you started.
You may use gl-data-visualization-view, a simple and easily extensible OpenGL-based data visualization framework, to display a VU meter. You just add a GLDataVisualizationView, set the visualization type to VU meter, and set the value to visualize. There is also an iPhone sample project which shows how to use the framework.
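If you only need the level to drive a needle or bar and don't want an extra framework, AVAudioRecorder's built-in metering is often enough. Below is a minimal Swift sketch with placeholder names and a crude dB-to-0...1 mapping of my own; it assumes the audio session is configured and the app has microphone access.

import AVFoundation
import Foundation

// Poll AVAudioRecorder's metering to drive a VU-style needle or bar.
// The recorder writes to a throwaway file; only the level metering is used.
final class LevelMeter {
    private var recorder: AVAudioRecorder?
    private var timer: Timer?

    func start(onLevel: @escaping (Float) -> Void) throws {
        let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("meter.caf")
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatAppleIMA4,
            AVSampleRateKey: 44_100.0,
            AVNumberOfChannelsKey: 1
        ]
        let recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder.isMeteringEnabled = true
        recorder.record()
        self.recorder = recorder

        // averagePower(forChannel:) is in decibels, roughly -160 (silence) up to 0 (full scale).
        timer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { _ in
            recorder.updateMeters()
            let db = recorder.averagePower(forChannel: 0)
            let level = max(0, (db + 60) / 60)   // crude mapping of -60...0 dB to 0...1
            onLevel(level)
        }
    }

    func stop() {
        timer?.invalidate()
        recorder?.stop()
    }
}

The closure receives a value between 0 and 1 that you can map straight onto the needle angle or bar height of whatever view you end up drawing.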

How to do realtime face detection?

How can I do real-time face detection while using the iPhone camera to take pictures?
just like this example: http://www.morethantechnical.com/2009/08/09/near-realtime-face-detection-on-the-iphone-w-opencv-port-wcodevideo/ (this example doesn't provide the .xcodeproj, so I can't compile the .cpp files)
another example: http://blog.beetlebugsoftware.com/post/104154581/face-detection-iphone-source
(can't be compiled)
Do you have any solution? Please give me a hand!
Wait for iOS 5:
Create amazing effects in your camera and image editing apps with Core Image. Core Image is a hardware-accelerated framework that provides an easy way to enhance photos and videos. Core Image provides several built-in filters, such as color effects, distortions and transitions. It also includes advanced features such as auto enhance, red-eye reduction and facial recognition.
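For reference, this is roughly what that Core Image face detection looks like in Swift once the framework is available; a sketch of the still-image case with names of my own choosing. For real-time use you would run the same detector over frames coming from AVCaptureVideoDataOutput rather than over a UIImage.

import CoreImage
import UIKit

// Run Core Image's face detector over a still image and return the face
// rectangles (in the image's coordinate space).
func detectFaces(in image: UIImage) -> [CGRect] {
    guard let ciImage = CIImage(image: image) else { return [] }
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyLow])
    // Low accuracy is the faster setting, which matters if you call this per frame.
    let faces = detector?.features(in: ciImage) ?? []
    return faces.map { $0.bounds }
}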

How to capture motion with the iPhone camera

In my application, as soon as the user opens the camera, it should capture an image whenever the current image differs from the previous one, and the camera should always stay in capturing mode.
This should happen automatically, without any user interaction. Please help me out, as I couldn't find a solution.
Thanks,
ravi
I don't think the iPhone camera can do what you want. It sounds like you're doing a type of motion detection by comparing two snapshots taken at different times and seeing if something has changed between the older and the newer image. The camera is not set up to take photos automatically, and I don't think the hardware can support the level of processing needed to compare two images in enough detail to detect motion.
Hmm, thinking about it, you might be able to detect motion by somehow measuring the frame differentials in the compression of the video. All video codecs save space by only registering the parts of the video that change from frame to frame. So a large change in the saved data would indicate a large change in the environment.
I have no idea how to go about doing that but it might give you a starting point.
You could try using OpenCV for motion detection based on differences between captured frames, but I'm not sure if the iPhone API allows reading multiple frames from the camera.
Look for motempl.c in the OpenCV distribution.
You can take a screenshot to automatically capture the image, using the UIGetScreenImage function.
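If your deployment target lets you grab live frames (AVCaptureVideoDataOutput does, from iOS 4 on), a crude version of the frame differencing described above is not much code. Here is a Swift sketch with an illustrative threshold and sampling step that you would need to tune; it assumes the capture output is configured for a bi-planar YCbCr pixel format so that plane 0 is the luma plane.

import AVFoundation

// Compare each frame's luma plane against the previous one and report "motion"
// when the mean difference crosses a threshold.
final class MotionDetector: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var previousLuma: [UInt8]?
    var onMotion: (() -> Void)?

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        // Plane 0 is the luma (brightness) plane in a 420YpCbCr8BiPlanar buffer.
        guard let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0) else { return }
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
        let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
        let count = height * bytesPerRow
        let luma = [UInt8](UnsafeBufferPointer(start: base.assumingMemoryBound(to: UInt8.self),
                                               count: count))

        if let previous = previousLuma, previous.count == luma.count {
            // Sample every 16th byte to keep the per-frame cost low.
            var total = 0
            var samples = 0
            var i = 0
            while i < count {
                total += abs(Int(luma[i]) - Int(previous[i]))
                samples += 1
                i += 16
            }
            if Double(total) / Double(samples) > 12 {   // illustrative threshold
                onMotion?()
            }
        }
        previousLuma = luma
    }
}

You would set an instance of this class as the sampleBufferDelegate of your AVCaptureVideoDataOutput and, when onMotion fires, trigger a still capture (or just save that frame).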