Beat detection: detecting audio from an app and producing a waveform or bars for that audio, just like Winamp or Windows Media Player - iPhone

I am trying to understand the concept of beat detection, and I found that it works by detecting sound through the microphone. So my first question is: isn't it a disadvantage to detect sound through the microphone? When we are using the device, other sounds from the environment are picked up as well, so the detected beat will not match the actual audio.
My second question (actually where I got stuck): I found that BeatDetektor is not able to access the iPod library. Will I be able to produce beats if I fetch a song from the iPod library in my app and then use it with BeatDetektor?
http://www.cubicproductions.com/index.php?option=com_content&view=article&id=67&Itemid=82
http://www.gearslutz.com/board/product-alerts-older-than-2-months/457617-beatdetektor-iphone-app-open-source-algorithm-bpm-detection.html
I will be very thankful for any reference/link other than the ones provided above to understand beat detection better...
Edit 1
I have got the code for the above from this link, but the code is in C++, and the instructions say to convert it to an Xcode project using CMake. I was somehow able to generate the Xcode project, but it contains only .cpp files, so how should I run the program on the iPhone?

OK, I was somehow able to solve my problem with Apple's sample code aurioTouch. I implemented a song in that example and rendered bars based on the beats of the song. On the iPhone we can only capture sound for beat analysis through the mic, so aurioTouch uses the mic input for beat detection.
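For anyone following the same route, here is a minimal sketch of the analysis idea in Swift, using Accelerate's vDSP rather than aurioTouch's actual code: one buffer of mono samples goes through a real FFT and the magnitude bins are averaged into bar heights. The buffer size and bar count are illustrative assumptions.

```swift
import Accelerate
import Foundation

// Minimal sketch: turn one power-of-two buffer of mono samples into a few
// bar heights via a real FFT. Not aurioTouch's code; names are illustrative.
func barHeights(samples: [Float], barCount: Int = 16) -> [Float] {
    let n = samples.count                          // assume a power of two, e.g. 1024
    let log2n = vDSP_Length(log2(Float(n)))
    guard n >= 2 * barCount,
          let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(setup) }

    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    var magnitudes = [Float](repeating: 0, count: n / 2)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                        imagp: imagPtr.baseAddress!)
            // Pack the real samples into split-complex form, then FFT in place.
            samples.withUnsafeBufferPointer { src in
                src.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            vDSP_zvabs(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
        }
    }

    // Average the magnitude bins into `barCount` bars for drawing.
    let binsPerBar = magnitudes.count / barCount
    return (0..<barCount).map { bar in
        magnitudes[bar * binsPerBar ..< (bar + 1) * binsPerBar]
            .reduce(0, +) / Float(binsPerBar)
    }
}
```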

Related

How to encode artist name and image into an audio recording in iOS?

I am working on a music app, and one of its features is recording the user's voice and playing it back. So far everything is under control. Yesterday I got an idea and straight away started Googling: adding artist names and an album image to my recorded audio using AVAudioRecorder, but without much success.
I also looked at the AV Foundation Audio Settings Constants to set the AVAudioRecorder settings, and failed with this as well.
You can probably use an existing audio tagging library, so after creating the file you can use it to add the required data. I did a quick search and found these libraries:
SonatinaTag: It was made for OS X, but it may work on iOS (the project states that it has very few external requirements). Not sure if it supports writing.
TagLib-ObjC: A wrapper for the popular TagLib. Seems to be in development.
TagLib: TagLib, I know, is just pure C++ pain, but maybe it is not that hard to use.
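Alternatively, if the recording ends up as an M4A, you may be able to stay within AVFoundation itself: here is a minimal sketch (my own, not from the poster's setup) that re-exports the file through AVAssetExportSession with artist and artwork metadata attached. The URLs and the assumption that the artwork is JPEG/PNG data are illustrative.

```swift
import AVFoundation

// Hedged sketch: re-export a recorded M4A with artist and artwork metadata.
// sourceURL/outputURL and the artwork data are placeholders.
func addMetadata(to sourceURL: URL, outputURL: URL, artist: String, artwork: Data) {
    let asset = AVURLAsset(url: sourceURL)

    let artistItem = AVMutableMetadataItem()
    artistItem.identifier = .commonIdentifierArtist
    artistItem.value = artist as NSString

    let artworkItem = AVMutableMetadataItem()
    artworkItem.identifier = .commonIdentifierArtwork
    artworkItem.value = artwork as NSData

    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetAppleM4A) else { return }
    export.outputURL = outputURL
    export.outputFileType = .m4a
    export.metadata = [artistItem, artworkItem]
    export.exportAsynchronously {
        // Check export.status / export.error here.
    }
}
```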
Good luck!

How do I record a video of my iPhone app from the simulator?

I am using OS X Mountain Lion, and I have been able to record a video using QuickTime screen capture, but it records the sound from the microphone rather than the audio generated by the iOS Simulator.
I want to record audio and video from the iOS Simulator.
You could:
- try professional screen recording software (Camtasia, ScreenFlow, ...)
- use a virtual sound output device that captures the sound and writes it to disk
- connect your sound output to your input (using an RCA cable)
See http://bit.ly/UXBJ9N for more info on the latter two.
I ended up using Soundflower (which I can't praise enough; it was much simpler than I expected, and a tiny app too) to capture audio, and a random screen capture utility to capture video, then I married them up in a video editing application. Not perfect, but it works.
I will post here if I find a simpler solution, because despite all the blog posts about this matter, I could not find anything ideal.

How to record sound programmatically and how to play back that recorded audio?

I am developing an application in which I want to record sounds and play back the recorded sound file. I know the frameworks for doing this, but how do I implement it programmatically using those frameworks?
You can refer to this link: How do I record audio on iPhone with AVAudioRecorder?
I have implemented this code in one of my apps and it works completely fine.
For playing the sound, you can use AVAudioPlayer.
Hope this helps.
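As a starting point, here is a minimal Swift sketch of that record-then-play flow with AVAudioRecorder and AVAudioPlayer; the file name and recorder settings are illustrative, and error handling is kept minimal:

```swift
import AVFoundation

// Minimal sketch: record the microphone to a file with AVAudioRecorder,
// then play the same file back with AVAudioPlayer.
final class VoiceMemo {
    private var recorder: AVAudioRecorder?
    private var player: AVAudioPlayer?
    private let fileURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("memo.m4a")   // illustrative file name

    func startRecording() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)

        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 44100,
            AVNumberOfChannelsKey: 1,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
        ]
        recorder = try AVAudioRecorder(url: fileURL, settings: settings)
        recorder?.record()
    }

    func stopAndPlayBack() throws {
        recorder?.stop()
        player = try AVAudioPlayer(contentsOf: fileURL)
        player?.play()
    }
}
```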
The best way to do it - and I am talking from painful experience here - is with the RemoteIO audio unit. You can also do it with Audio Queue, but it has higher latency, and the queue-type approach becomes very problematic.
So I think they are really different tools for different jobs. Note that you won't play a sound file as such; you will play the contents of a buffer held in memory. As long as the buffer is not too large, this should not be an issue.
So, going with RemoteIO, you will find this blog and tutorial very useful. It includes code samples.
Using RemoteIO audio unit, by Michael Tyson
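For a sense of what the RemoteIO route involves, here is a hedged Swift sketch of the setup (my own condensation, not Tyson's tutorial code); error checking and the actual buffer copy are omitted:

```swift
import AudioToolbox

// Hedged sketch: a minimal RemoteIO output unit whose render callback would
// be filled from a buffer held in memory.
func startRemoteIO() -> AudioUnit? {
    var desc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    guard let component = AudioComponentFindNext(nil, &desc) else { return nil }
    var unit: AudioUnit?
    AudioComponentInstanceNew(component, &unit)
    guard let audioUnit = unit else { return nil }

    // The render callback runs on the realtime audio thread; it should only
    // copy samples from your preloaded buffer into ioData.
    let render: AURenderCallback = { _, _, _, _, frameCount, ioData in
        // Fill ioData with `frameCount` frames from your in-memory buffer here.
        return noErr
    }
    var callback = AURenderCallbackStruct(inputProc: render, inputProcRefCon: nil)
    AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0,
                         &callback, UInt32(MemoryLayout<AURenderCallbackStruct>.size))

    AudioUnitInitialize(audioUnit)
    AudioOutputUnitStart(audioUnit)
    return audioUnit
}
```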

Changing Beats per Minute of selected song

I am working on an iPhone application where I want to change the beats per minute (BPM) of a selected song. I am using the AVAsset classes for that.
How can this be implemented? Any code help would be appreciated.
I recently started toying with this. What you need to do is use AudioToolbox to read the audio data; AudioToolbox can decode most popular file types.
Then use something like SoundTouch (just Google it, it's open source), a C++ library that allows you to time-stretch sound files. It even comes with an example that shows you how to do exactly this. I tried it, and it works on iOS.
You will need AVAssetReader if you want to access iPod songs, and you can only test that code on a real device rather than the simulator, as the simulator has no iPod library.
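To make the AVAssetReader step concrete, here is a hedged Swift sketch that pulls 16-bit linear PCM out of an iPod library item; the asset URL would come from MPMediaItemPropertyAssetURL, and the output settings are one reasonable choice, not the only one:

```swift
import AVFoundation

// Hedged sketch: decode an iPod library item to 16-bit interleaved stereo PCM
// with AVAssetReader, ready to hand to a time-stretcher such as SoundTouch.
func readPCM(from assetURL: URL) throws {
    let asset = AVURLAsset(url: assetURL)
    guard let track = asset.tracks(withMediaType: .audio).first else { return }

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 2,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
    reader.add(output)
    reader.startReading()

    while let sample = output.copyNextSampleBuffer(),
          let block = CMSampleBufferGetDataBuffer(sample) {
        var length = 0
        var pointer: UnsafeMutablePointer<Int8>?
        CMBlockBufferGetDataPointer(block, atOffset: 0, lengthAtOffsetOut: nil,
                                    totalLengthOut: &length, dataPointerOut: &pointer)
        // `pointer`/`length` now hold raw PCM; feed them to SoundTouch here.
    }
}
```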

Record iPhone app video (without simulator)

How can I record a video of an iPhone app? I can't use the Simulator because the application is very OpenGL-heavy and uses the accelerometer/gyroscope.
You can have the iPhone output video and capture it on another device: How can I use the MPTVOutWindow iPhone undocumented class?
One of the links in that answer says it doesn't work on iOS 4+, but on a project I worked on less than one month ago we used that feature from an iPhone 4 to present, so I would challenge that (unless the developer who handled that portion used another approach).
I'm not sure there is a "native" solution here, short of building video capture into your actual app.
The cleanest way of handling this, assuming your game/app has a cleanly designed input pipeline, is probably to mock the input (see the sketch after these steps):
- Put in debug-only code that lets you "record" all the raw input events.
- Using the device, play out the demo session to create a "recording".
- Run the app in the simulator, and feed it the input "recording" you made on the device.
The simulator will run GL stuff just fine, and probably at a higher framerate than your device will.
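As an illustration of that record-and-replay idea, here is a small Swift sketch; every type and field name in it is made up for the example, since the actual shape depends on your input pipeline:

```swift
import Foundation

// Hedged sketch: a debug-only recorder for raw input events so a device
// session can be replayed in the simulator. Event fields are illustrative.
struct InputEvent: Codable {
    let timestamp: TimeInterval   // seconds since recording started
    let kind: String              // e.g. "touch", "accel", "gyro"
    let values: [Double]          // coordinates or sensor readings
}

final class InputRecorder {
    private var events: [InputEvent] = []
    private let start = Date()

    func record(kind: String, values: [Double]) {
        events.append(InputEvent(timestamp: Date().timeIntervalSince(start),
                                 kind: kind, values: values))
    }

    func save(to url: URL) throws {
        try JSONEncoder().encode(events).write(to: url)
    }

    // Replay by dispatching each event back into the app's input pipeline
    // at its recorded offset from the start of playback.
    func replay(from url: URL, handler: @escaping (InputEvent) -> Void) throws {
        let recorded = try JSONDecoder().decode([InputEvent].self,
                                                from: Data(contentsOf: url))
        for event in recorded {
            DispatchQueue.main.asyncAfter(deadline: .now() + event.timestamp) {
                handler(event)
            }
        }
    }
}
```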