Core Audio tempo change (iOS)

I'm very new to audio programming, but I know this must be possible. (This is an iOS/iPhone related question).
How would I go about changing the tempo of a loaded audio file without changing the pitch, and then playing it back?
I think I need to delve into the CoreAudio framework, but I'm not sure where to begin.
If anyone could let me know what classes I need to look at, or the general process involved, that would help me get started and I'd really appreciate it!
Cheers!

This question is closely related: it deals with pitch shifting rather than time stretching, but I'd check out the comments and links.
Real-time Pitch Shifting on the iPhone

What you are looking for is a time-pitch modification library. Core Audio on iOS currently does not include one, but there appear to be some 3rd-party libraries available (commercially). There are also time-pitch tutorials on the web, such as the one at DSP Dimension, but they require a large amount of DSP development to get working.
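To give a sense of what those tutorials build up to, below is a very rough sketch (plain C, mono float samples) of the naive overlap-add (OLA) idea they usually start from: windowed frames are read from the input at one hop size and written to the output at another, so the duration changes while the pitch does not. The frame and hop values and the function name are just illustrative, and real libraries layer WSOLA or a phase vocoder on top of this to suppress the artifacts plain OLA produces.

#include <math.h>
#include <stdlib.h>

enum { kFrame = 1024, kSynHop = kFrame / 2 };    /* 50% overlap on the output */

/* stretch > 1.0 slows the audio down (longer output), < 1.0 speeds it up.
   Returns a newly allocated buffer; *outCount receives its length. */
float *time_stretch_ola(const float *in, size_t inCount,
                        double stretch, size_t *outCount)
{
    double anaHop = kSynHop / stretch;               /* hop through the input */
    size_t outLen = (size_t)(inCount * stretch) + kFrame;
    float *out = calloc(outLen, sizeof(float));
    if (out == NULL) return NULL;

    /* Periodic Hann window: 50%-overlapped copies sum to unity. */
    float win[kFrame];
    for (size_t n = 0; n < kFrame; n++)
        win[n] = 0.5f * (1.0f - cosf(2.0f * (float)M_PI * n / kFrame));

    double readPos = 0.0;
    size_t writePos = 0;
    while ((size_t)readPos + kFrame <= inCount && writePos + kFrame <= outLen) {
        const float *src = in + (size_t)readPos;
        for (size_t n = 0; n < kFrame; n++)
            out[writePos + n] += win[n] * src[n];    /* overlap-add */
        readPos  += anaHop;
        writePos += kSynHop;
    }

    *outCount = (writePos > 0) ? writePos - kSynHop + kFrame : 0;
    return out;
}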

Related

How to scratch audio like a DJ?

I have to develop an iPhone DJ app. It includes everything from playing a song to scratching the audio.
Is there a way I can scratch the audio? Are there any good frameworks you can suggest? What are the best possible options?
I have referred to the links below, but they didn't help my cause:
http://blog.glowinteractive.com/2011/01/vinyl-scratch-emulation-on-iphone/
Scratching Audio
I'd start by reading Kjetil Falkenberg Hansen's recent PhD thesis, The Acoustics and Performance of DJ Scratching, to get to grips with the nature of the problem. It should provide you with some effective parameters for your program.
I imagine you'll want to buffer a certain amount of the audio to be 'scratched' and simply advance through said buffer at varying speeds both forward and backwards.
Consider this link (and similar ones) for how to build a buffer.
If the iPhone API doesn't provide a useful way to advance through the buffer at different speeds, you might consider making your own temporary buffer and using an interpolating function to populate the buffer the iPhone plays from.
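Here is a minimal sketch (plain C, mono float samples) of that idea: a read head that moves through the buffer at a variable, signed rate with linear interpolation. The names and the clamping behaviour are illustrative assumptions; a real DJ app would drive the rate from touch gestures and smooth it to avoid zipper noise.

#include <stddef.h>

typedef struct {
    const float *samples;   /* mono source audio */
    size_t       count;     /* number of samples */
    double       position;  /* fractional read head */
} ScratchPlayer;

/* Fill `out` with `frames` samples, advancing the read head by `rate`
   samples per output sample (1.0 = normal speed, -1.0 = reverse). */
static void scratch_render(ScratchPlayer *p, float *out, size_t frames, double rate)
{
    if (p->count < 2) return;

    for (size_t i = 0; i < frames; i++) {
        /* Clamp the head inside the buffer (a DJ app might loop or stop instead). */
        if (p->position < 0.0) p->position = 0.0;
        if (p->position > (double)(p->count - 2)) p->position = (double)(p->count - 2);

        size_t idx  = (size_t)p->position;
        double frac = p->position - (double)idx;

        /* Linear interpolation between neighbouring samples. */
        out[i] = (float)((1.0 - frac) * p->samples[idx] + frac * p->samples[idx + 1]);

        p->position += rate;
    }
}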
BTW - the first link you posted looks very useful indeed! What's it missing?

iPhone Audio Modifications (filters/effects)

I am developing an app where the user can record their voice and then alter it in some way. I have implemented OpenAL, and I am able to adjust the pitch to speed up and slow down the audio file. The thing is, I want to add filters like echo, reverb, etc. I have scoured the internet for hours and have found nothing to help me. I came across an OpenAL wrapper called FreeSL, which has a bunch of filters built in, but I cannot get it to compile in Xcode.
I have also looked into Dirac3, but again all I am seeing is basic pitch/time controls; no echoes or anything.
Can anyone point me in the direction of a good framework, or explain how OpenAL can handle filters like this?
Thanks!
I found a library that is exactly what I am looking for, FMOD:
http://www.fmod.org/index.php/fmod

How to develop an iphone app with reverb functionality?

I am developing an iPhone application (for audio processing). I have to apply some effects to the audio.
If it were a desktop app there would be many options; we can find good examples and full projects like Audacity. But I want to develop for iPhone.
I found an app with a reverb option (take a look at the following link). I have only watched the video; I did not test the application on my iPhone.
http://www.appstorehq.com/reverb-iphone-89870/app
My question is: how can I develop an app with reverb functionality? Is there any documentation for that? If there is, please share it with us.
NOTE: We may be able to use an Audio Unit to build the reverb functionality (I am not clear on this).
EDIT: I don't want to use any third-party library.
If anybody has knowledge about this, please share it with us.
Thanks.
If you're targeting iOS 5, you can just use the kAudioUnitSubType_Reverb2 subtype of the effect audio unit.
reverb unit
AudioComponentDescription auEffectUnitDescription;
auEffectUnitDescription.componentType = kAudioUnitType_Effect;
auEffectUnitDescription.componentSubType = kAudioUnitSubType_Reverb2;
auEffectUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
auEffectUnitDescription.componentFlags = 0;
auEffectUnitDescription.componentFlagsMask = 0;

// Add the reverb effect node to an existing AUGraph.
AUNode auEffectNode;
AUGraphAddNode(processingGraph,
               &auEffectUnitDescription,
               &auEffectNode);
Failing that, you could write your own reverb code in the RemoteIO render callback. A simple delay might be easier to implement and would sound similar.
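As a rough illustration of that fallback, here is a hedged sketch of a single feedback delay running in a RemoteIO render callback. It assumes ioData already holds the dry signal as mono Float32 samples (for example, rendered earlier in the same callback); the delay length and feedback amount are arbitrary, and a single delay line gives an echo, not a real reverb.

#include <AudioToolbox/AudioToolbox.h>

#define kDelaySamples 13230   /* ~0.3 s at 44.1 kHz */

static Float32 gDelayLine[kDelaySamples];
static UInt32  gDelayIndex = 0;

static OSStatus renderCallback(void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData)
{
    Float32 *samples = (Float32 *)ioData->mBuffers[0].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        Float32 dry     = samples[i];
        Float32 delayed = gDelayLine[gDelayIndex];

        /* Mix the delayed signal back in and feed the result into the delay
           line so the echo decays (0.4 = feedback amount, tune by ear). */
        Float32 wet = dry + 0.4f * delayed;
        gDelayLine[gDelayIndex] = wet;

        samples[i]  = wet;
        gDelayIndex = (gDelayIndex + 1) % kDelaySamples;
    }
    return noErr;
}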
iOS 5.0 brings native reverb support to OpenAL, so it is now much easier: you don't have to code the algorithm yourself. It supports a variety of reverb spaces:
Small Room
Medium Room
Large Room (2 configurations)
Medium Hall (3 configurations)
Large Hall (2 configurations)
Plate
Medium Chamber
Large Chamber
Cathedral
I suggest that you try the ObjectAL wrapper, which already has great support for the reverb effect:
https://github.com/kstenerud/ObjectAL-for-iPhone
Grab the source from this repository, load "ObjectAL.xcodeproj", and run the ObjectALDemo target on any iOS 5.0 device (it should also work on the simulator). This will give you a good starting point and a feel for what the reverb effect is capable of.
If you still don't want to use any 3rd-party library, you can just grab the relevant pieces from ObjectAL. Look for the reverb-related code in the following source files (and their corresponding headers):
https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/OpenAL/ALListener.m
https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/OpenAL/ALSource.m
https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/OpenAL/ALWrapper.m
Good luck with your project!
AUs are a good place to start.
Write your own reverb AU which contains a reverb implementation. There are tons of ways to implement a reverb. A medium/long convolution reverb is too much to ask of a phone, but something such as an FDN (feedback delay network) will not require a lot of memory or CPU.
Both are easy to implement if you're familiar with audio programming and optimization. The tough part is making one that sounds very good and performs well.
If you're unable to write optimal low-level code or you do not (presently) understand basic audio signal processing, then you'll have a few obstacles to overcome; it may be a long road in that case.
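To give a sense of scale, here is a rough sketch of a tiny FDN core in plain C: four delay lines whose outputs are mixed through a scaled Hadamard matrix and fed back. The delay lengths, feedback gain and mono float processing are illustrative assumptions, and a usable reverb also needs damping filters, early reflections and a lot of tuning.

#include <stddef.h>
#include <string.h>

#define FDN_LINES 4

/* Mutually prime delay lengths in samples (at roughly 44.1 kHz). */
static const size_t kDelayLen[FDN_LINES] = { 1433, 1601, 1867, 2053 };

typedef struct {
    float  lines[FDN_LINES][2053];  /* sized for the longest delay */
    size_t pos[FDN_LINES];
    float  feedback;                /* overall decay, e.g. 0.7 */
} FDNReverb;

static void fdn_init(FDNReverb *r, float feedback)
{
    memset(r, 0, sizeof(*r));
    r->feedback = feedback;
}

/* Process one input sample and return one wet output sample. */
static float fdn_process(FDNReverb *r, float in)
{
    float out[FDN_LINES];

    /* Read the current tap of each delay line. */
    for (int i = 0; i < FDN_LINES; i++)
        out[i] = r->lines[i][r->pos[i]];

    /* Scaled 4x4 Hadamard matrix: keeps the feedback energy-neutral
       before the decay gain is applied. */
    float fb[FDN_LINES] = {
        0.5f * ( out[0] + out[1] + out[2] + out[3]),
        0.5f * ( out[0] - out[1] + out[2] - out[3]),
        0.5f * ( out[0] + out[1] - out[2] - out[3]),
        0.5f * ( out[0] - out[1] - out[2] + out[3]),
    };

    /* Write input plus attenuated feedback into each line and advance. */
    for (int i = 0; i < FDN_LINES; i++) {
        r->lines[i][r->pos[i]] = in + r->feedback * fb[i];
        r->pos[i] = (r->pos[i] + 1) % kDelayLen[i];
    }

    return 0.25f * (out[0] + out[1] + out[2] + out[3]);
}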
Searching the iOS documentation for "reverb" produces a link to the Core Audio Overview, which references reverb as an "effect unit." Perhaps that's worth further study?
No good. I have attempted the audio unit approach, and even though it is in the documentation, it is not actually implemented yet by the Apple engineers. Each time you call the function to set the reverb property, you only get a failure status code. You would have to implement your own reverb effect. Try reading a DSP book and you might find a clue.
You need to learn some DSP-level coding; the DSP cookbook is okay and there are others out there. Basically, you need to be comfortable handling an audio signal in the frequency domain and with things such as FFTs. Once you have that, implementing a reverb filter should be straightforward.
This is an answer I've given before, but I believe it is relevant here. I am going to agree with the others and say that you are going to have to become a bit more familiar with Core Audio if you want to do this properly.
I highly recommend this Core Audio book. It will teach you what you need to do this right and will save you a lot of frustration.
The chapter on audio effects has not been published yet, but if it is anything like the rest of the book it's worth the wait.
EDIT
You will most likely need to do this with an audio effect (which is a form of an audio unit).

Can anyone point me in a direction I can follow to learn how to handle audio in Xcode?

My real objective is to take one audio file, create X different pitches from it, and then play them in the app using some code to handle the timing.
TIA for any helpful insight
You can read the Core Audio documentation.
See this answer, which recommends using the AVFoundation framework.
Core Audio is supposed to be fairly low level. Great if you need more flexibility/control, but AVFoundation may be more appropriate for your app.

Real-time Pitch Shifting on the iPhone

I have a children's iPhone application that I am writing, and I need to be able to shift the pitch of a sound sample using Core Audio. Does anyone have any example code I could look at where this is done? There are many music and game apps in the App Store that do this, so I know I am not the first one. However, I cannot find any examples of it being done.
You can use DIRAC-2 from DSP Dimension for pitch shifting on the iPhone. Quote:
"DIRAC2 is available as both a commercial object library offering unlimited sample rates and phase locked multichannel support and as a free single channel, 44.1/48kHz LE version."
Use the SoundTouch open-source project to change the pitch.
Here is the link: http://www.surina.net/soundtouch/
Once you add SoundTouch to your project, you give it the input sound file path, the output sound file path, and the pitch change as inputs.
Since processing the whole file takes extra time, it's better to modify SoundTouch so that, as you record the voice, you pass the data directly for processing. It will make your application better.
I know it's too late for the person who asked, but it is a really valuable link (as I found) for anyone else who is looking for a solution to the same problem.
So here we have the latest DIRAC3 with its own audio player classes, which take care of run-time pitch and speed shifting (explore it for who knows what more). Run the sample and give it a huge round of applause.
Try Dirac - it's the best technology out there and it's available on Win, Linux, MacOS X and iOS. We're using it in all our products (and a couple of others do as well, search for "Capo" on the App Store). They're at version 3 now which has seen a huge increase in performance since previous versions. Hope this helps.
See: Related question
How much control over pitch do you need... could you precalculate all the different sounds?
If the answer is yes, then you can just pick the right sounds and play them.
You could also use Audio Converter Services in conjunction with AVAudioPlayer, which will allow you to resample the audio (which will effectively repitch the sounds, though their duration will change).
Alternatively, as the related question points out, you could use OpenAL and its AL_PITCH source property.
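For example, a minimal OpenAL sketch of that last option (the buffer-loading code is omitted and the function name is just illustrative; AL_PITCH resamples, so 2.0 plays an octave higher and twice as fast):

#include <OpenAL/al.h>

void playAtPitch(ALuint buffer, float pitch)
{
    ALuint source;
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, (ALint)buffer);  /* buffer already holds decoded audio */
    alSourcef(source, AL_PITCH, pitch);           /* e.g. 0.5f .. 2.0f */
    alSourcef(source, AL_GAIN, 1.0f);
    alSourcePlay(source);
}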