So I have been looking around for some time now for a way to produce a variable tone on the iPhone using OpenAL, the issue being that Apple has deprecated the ALUT part of OpenAL, which had the alutCreateBufferWaveform function that would be perfect for this. I was wondering if anyone had any idea how to make a tone generator using OpenAL on the iPhone SDK. All I need is the ability to produce a certain frequency tone consistently, over and over again.
This is a last resort so sorry if it sounds kind of stupid.
This isn't exactly what you are looking for, but it can create a similar effect.
I used this tutorial
http://benbritten.com/2008/11/06/openal-sound-on-the-iphone/
to create an engine that could play a prerecorded sound at different levels. So even though I have to play the sound from an existing *.caf file, I can modulate the pitch and control looping so it produces any frequency, length, or volume I'm looking for.
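With ALUT gone, one workaround for the original question is to synthesize the waveform yourself and hand it to OpenAL. Below is a minimal sketch of that idea; the function name, the 44.1 kHz sample rate, and the one-second buffer length are my own choices, and it assumes an OpenAL device and context are already open:

    #include <OpenAL/al.h>
    #include <math.h>
    #include <stdlib.h>

    // Fill an OpenAL buffer with a sine tone and return a looping source for it.
    ALuint makeToneSource(float frequencyHz)
    {
        const int sampleRate = 44100;
        const int numSamples = sampleRate;   // one second of audio
        short *samples = (short *)malloc(numSamples * sizeof(short));
        for (int i = 0; i < numSamples; i++) {
            samples[i] = (short)(32767.0f * sinf(2.0f * (float)M_PI * frequencyHz * i / sampleRate));
        }
        // Note: for a perfectly click-free loop, the buffer should hold a whole
        // number of cycles of frequencyHz.

        ALuint buffer, source;
        alGenBuffers(1, &buffer);
        alBufferData(buffer, AL_FORMAT_MONO16, samples, numSamples * sizeof(short), sampleRate);
        free(samples);   // OpenAL copies the data, so it can be freed here

        alGenSources(1, &source);
        alSourcei(source, AL_BUFFER, buffer);
        alSourcei(source, AL_LOOPING, AL_TRUE);   // repeat the tone indefinitely
        return source;   // later: alSourcePlay(source); alSourceStop(source);
    }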
I've been searching for a while and can't come to a good conclusion.
I am trying to create an app that can "record" beats that a user makes on a 4x4 button array. Each button has a sound tied to it and after they hit record, I want to mix the audio that gets played and save it to a file so they can listen to it and play over it later.
What makes this even trickier is that there will be a metronome playing and I do not want to mix the metronome sound into the audio that is getting saved.
From what I have found, the only way to get these features is Audio Units, but I am reluctant to go that route since it seems like overkill and somewhat complicated to learn. Can Audio Toolbox make this any easier?
Thanks!
In general, this is straightforward to implement with Audio Toolbox.
For more information, see the sample code below; it's a lot of help.
MixerHost
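To give a flavor of what MixerHost demonstrates, here is a rough sketch of that kind of graph: a Multichannel Mixer unit feeding Remote I/O. The function name and the comments about where the pad sounds and metronome would attach are my own assumptions, and error checking is omitted:

    #include <AudioToolbox/AudioToolbox.h>

    // Sketch: Multichannel Mixer -> Remote I/O, the same shape MixerHost builds.
    static void buildMixerGraph(AUGraph *outGraph)
    {
        AudioComponentDescription mixerDesc = {
            .componentType         = kAudioUnitType_Mixer,
            .componentSubType      = kAudioUnitSubType_MultiChannelMixer,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };
        AudioComponentDescription ioDesc = {
            .componentType         = kAudioUnitType_Output,
            .componentSubType      = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };

        AUGraph graph;
        AUNode  mixerNode, ioNode;
        NewAUGraph(&graph);
        AUGraphAddNode(graph, &mixerDesc, &mixerNode);
        AUGraphAddNode(graph, &ioDesc, &ioNode);
        AUGraphOpen(graph);

        // Mixer output 0 feeds the Remote I/O unit's output element.
        AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

        // Each pad sound would get its own mixer input bus; keeping the metronome
        // off the buses you record from is what lets you save the mix without it.
        AUGraphInitialize(graph);
        AUGraphStart(graph);
        *outGraph = graph;
    }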
I am trying to write an iPhone app that should monitor any incoming sound. I am not sure how I can get the sound recorded by the iPhone's microphone and detect its frequency. If the same frequency sound is repeated a couple of times, then I need to take some action. Could anyone please help me here? I went through How to detect sound frequency / pitch on an iPhone? but I couldn't understand how to use it.
Any documentation or example would be really useful.
Thanks.
You'll appreciate reading this, on how to get the sound "without having to drop down to C", by using AVAudioRecorder...
Then, begin researching FFT...
Check out this post about FFT for iPhone, which mentions various options, including the possibility of using Apple's Accelerate framework (for which you will need to drop to C) to apparently get "Apple-written FFT functions".
This is probably what you really want to read.
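As a very rough illustration of the Accelerate route, here is a sketch of finding the dominant frequency in a block of mono float samples with vDSP. The function name and the assumption that you have already converted the recorded audio to floats are mine, and this is not production-ready pitch detection:

    #include <Accelerate/Accelerate.h>
    #include <math.h>
    #include <stdlib.h>

    // Return the loudest FFT bin's frequency in Hz. n must be a power of two.
    float dominantFrequency(const float *samples, int n, float sampleRate)
    {
        vDSP_Length log2n = (vDSP_Length)log2f((float)n);
        FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

        DSPSplitComplex split;
        split.realp = (float *)malloc(n / 2 * sizeof(float));
        split.imagp = (float *)malloc(n / 2 * sizeof(float));

        // Pack the real input into split-complex form and run an in-place real FFT.
        vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);
        vDSP_fft_zrip(setup, &split, 1, log2n, FFT_FORWARD);

        // Squared magnitude per bin, then pick the loudest bin.
        float *mags = (float *)malloc(n / 2 * sizeof(float));
        vDSP_zvmags(&split, 1, mags, 1, n / 2);
        float peak;
        vDSP_Length peakBin;
        vDSP_maxvi(mags, 1, &peak, &peakBin, n / 2);

        free(split.realp); free(split.imagp); free(mags);
        vDSP_destroy_fftsetup(setup);

        return (float)peakBin * sampleRate / n;   // bin index -> Hz
    }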
I have a children's iPhone application that I am writing, and I need to be able to shift the pitch of a sound sample using Core Audio. Does anyone have any example code I could look at where this is done? There are many music and game apps in the App Store that do this, so I know I am not the first one. However, I cannot find any examples of it being done.
You can use DIRAC2 from DSP Dimension for pitch shifting on the iPhone. Quote:
"DIRAC2 is available as both a commercial object library offering unlimited sample rates and phase locked multichannel support and as a free single channel, 44.1/48kHz LE version."
Use the SoundTouch open source project to change pitch.
Here is the link : http://www.surina.net/soundtouch/
Once you add SoundTouch to your project, you have to give it the input sound file path, the output sound file path, and the pitch change as inputs.
Since processing a whole file takes extra time, it's better to modify SoundTouch so that the data is handed off for processing directly as you record the voice. It will make your application better.
I know it's too late for the person who asked, but this is a really valuable link (as I found) for anyone else looking for a solution to the same problem.
Here we have the latest DIRAC3, with its own audio player classes that take care of run-time pitch and speed shifting (explore it for more). Run the sample and give it a huge round of applause.
Try Dirac - it's the best technology out there and it's available on Win, Linux, MacOS X and iOS. We're using it in all our products (and a couple of others do as well, search for "Capo" on the App Store). They're at version 3 now which has seen a huge increase in performance since previous versions. Hope this helps.
See: Related question
How much control over pitch do you need... could you precalculate all the different sounds?
If the answer is yes, then you can just pick the right sounds and play them.
You could also use Audio Converter Services in conjunction with AVAudioPlayer, which will allow you to resample the audio (which will effectively repitch them, though they'll change duration).
Alternatively, as the related question points out, you could use OpenAL and AL_PITCH.
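For what it's worth, the OpenAL route is nearly a one-liner once a source has a buffer attached; here is a hedged sketch (the helper name is mine):

    #include <OpenAL/al.h>

    // Play an already-loaded source at a pitch multiple.
    // AL_PITCH resamples, so duration changes along with pitch.
    void playAtPitch(ALuint source, float pitch)
    {
        alSourcef(source, AL_PITCH, pitch);   // 2.0 = octave up, 0.5 = octave down
        alSourcePlay(source);
    }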
I'm looking to create an app that emulates a physical instrument. I've got audio samples but I want to be able to increase the pitch/frequency dynamically so I don't have to load from too many files.
Any idea which audio API will be able to do this? I reckon either OpenAL or Audio Queue Services but am not sure which is suitable. Any links to guides/sample code is also much appreciated.
Thanks in advance.
I went down this road in 2009, trying Audio Toolbox, Audio Queue Services, OpenAL, and finally settling on the RemoteIO AudioUnit.
Audio Toolbox is fine for basic triggered sound effects, but it wasn't able to change frequencies or loop samples.
Audio Queue Services can loop samples, but the only way I could find to adjust the playback frequency of a sample was to re-read the data from the file -- very painful. Plus, the framework is tremendously cumbersome - I'd only use it if I was trying to stream something off the Internet.
OpenAL was a godsend - I was up and running with it in under an hour, after getting my hands on the no-longer-available-from-Apple "CrashLanding" iPhone sample app. I found OpenAL to be ideally suited to games or even a musical instrument -- samples could be pre-loaded, adjusting the frequency was easy, and looping was no problem. The deal-breaker for me was that starting and stopping a looped sample would result in a nasty "pop" almost every time. Also, the built-in 3D positional audio mixer was a bit too CPU-intensive for my liking.
If your instrument does not use looped samples, I'd suggest trying the OpenAL route first - the learning curve is much less intimidating. Try to track down "SoundEngine.h", "CrashLanding" or "TouchFighter", or check out the following link:
http://benbritten.com/blog/2008/11/06/openal-sound-on-the-iphone/
Since looped samples was a requirement for me, I finally settled on AudioUnits (which, on the iPhone, is referred to as "RemoteIO" if you want to do input or output). It was tremendously difficult to implement - very similar to Audio Queue Services, in that the core of your implementation will be inside a "buffer callback", being called several times per second to fill a buffer of outbound audio with raw SInt16 values.
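To make the shape of that concrete, here is a rough sketch of such a render callback generating a sine tone. The synth-state struct and the assumption of a 16-bit mono stream format are my own stand-ins, not the code from my instrument:

    #include <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    typedef struct { double phase; double frequency; double sampleRate; } SynthState;

    // RemoteIO calls this whenever it needs more output audio.
    static OSStatus renderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        SynthState *synth = (SynthState *)inRefCon;
        SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;   // assumes 16-bit mono

        for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
            out[frame] = (SInt16)(32767.0 * sin(synth->phase));
            synth->phase += 2.0 * M_PI * synth->frequency / synth->sampleRate;
            if (synth->phase > 2.0 * M_PI) synth->phase -= 2.0 * M_PI;
        }
        return noErr;
    }
    // Attach it with an AURenderCallbackStruct via kAudioUnitProperty_SetRenderCallback
    // on the RemoteIO unit (input scope, output bus 0).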
Ultimately, I got my instrument working beautifully with multi-note polyphony, looped samples, no popping, and minimal latency.
Unfortunately, RemoteIO is not well documented. Michael Tyson was one of the first in the field to write about RemoteIO at length, and his posts (and the comments) were very useful to me:
http://michael.tyson.id.au/2008/11/04/using-remoteio-audio-unit/
Good luck!
Edited years later: I've open-sourced the RemoteIO/AudioUnits code I alluded to above: https://github.com/glenn-barnett/hexaphone/blob/master/Classes/Instrument.m - apologies for the mess, I hope to get some time to clean up the code and comments.
Try creating an Audio Unit. I'm doing something similar, and an AU worked well for me.
Initially I used an audio queue, as it was simpler (higher level?) and synchronous; however, it was lacking in responsiveness, so I dumped it for the Audio Unit.
It sounds a bit like you're essentially recreating the wavetable synthesis method of playing MIDI files. You might be able to find a MIDI synthesizer for the iPhone that you can use, and then use your audio samples to build a wavetable set. Any time you want to play tones, you would simply send the MIDI event into the iPhone MIDI synth with your loaded wavetable set.
Another option now is AUSampler.
http://developer.apple.com/library/mac/#technotes/tn2283/_index.html
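As a hedged sketch of that option: an AUSampler is just another node in an AUGraph (iOS 5 or later) that you connect to Remote I/O and drive over MIDI. Loading the actual samples or preset is covered in TN2283 and omitted here; the variable names are my own:

    #include <AudioToolbox/AudioToolbox.h>

    // The sampler's component description; add it to a graph with AUGraphAddNode
    // and connect its output to the Remote I/O node, as in any other AUGraph setup.
    AudioComponentDescription samplerDesc = {
        .componentType         = kAudioUnitType_MusicDevice,
        .componentSubType      = kAudioUnitSubType_Sampler,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    // Once the graph is running and samplerUnit has been fetched via AUGraphNodeInfo,
    // trigger middle C at full velocity (0x90 = note-on, channel 0):
    // MusicDeviceMIDIEvent(samplerUnit, 0x90, 60, 127, 0);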