Changing the pitch of a sound in realtime (Swift)

I have been trying, for the past few weeks, to find a simple way to control audio frequency (i.e. change the pitch of an audio file) from within Swift in real time.
I have tried with AVAudioPlayer.rate but that only changes the speed.
I even tried connecting an AVAudioUnitTimePitch to an audio engine with no success.
AVAudioUnitTimePitch just gives me an error, and rate changes the playback speed which is not what I need.
What I would like to do is make a sound play at a higher or lower pitch, say from -1.0 to 2.0 (at 2.0 the sound would last audioSource.duration / 2, i.e. it would play twice as fast).
Do you guys know of any way to do this even if I have to use external libraries or classes?
Thank you, I am stumped as to how to proceed.

If you use the Audio Unit API, the iOS built-in kAudioUnitSubType_NewTimePitch Audio Unit component can be used to change pitch while playing audio at the same speed. In particular, the kNewTimePitchParam_Rate and kNewTimePitchParam_Pitch parameters are independent.
The Audio Unit framework has a (currently non-deprecated) C API, but one can call C API functions from Swift code.
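That said, AVFoundation exposes this same NewTimePitch component as AVAudioUnitTimePitch, whose pitch (in cents) and rate properties are likewise independent, so it can also work if the engine graph is wired up correctly. A minimal sketch (the file name "sound.caf" is a placeholder):

```swift
import AVFoundation

// Minimal sketch: AVAudioUnitTimePitch wraps the same NewTimePitch
// component; pitch (in cents) and rate are independent.
func playPitchShifted() throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()

    timePitch.pitch = 1200   // +1200 cents = one octave up
    timePitch.rate = 1.0     // playback speed unchanged

    engine.attach(player)
    engine.attach(timePitch)
    engine.connect(player, to: timePitch, format: nil)
    engine.connect(timePitch, to: engine.mainMixerNode, format: nil)

    let url = Bundle.main.url(forResource: "sound", withExtension: "caf")!
    let file = try AVAudioFile(forReading: url)
    player.scheduleFile(file, at: nil)

    try engine.start()
    player.play()
    // In a real app, keep `engine` and `player` alive (e.g. as properties).
}
```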

Related

How Do I Get Reliable Timing for my Audio App?

I have an audio app in which all of the sound generating work is accomplished by Pure Data (using libpd).
I've coded a special sequencer in Swift which controls the start/stop playback of multiple sequences, played by the synth engines in Pure Data.
Until now, I've completely avoided using Core Audio or AVFoundation for any aspect of my app, because I know nothing about them, and they both seem to require C or Objective-C coding, which I know nearly nothing about.
However, I've been told in a previous Q&A on here that I need to use Core Audio or AVFoundation to get accurate timing. Without it, I've tried everything else, and the timing is totally messed up (laggy, jittery).
All of the tutorials and books on Core Audio seem overwhelmingly broad and deep to me. If all I need from one of these frameworks is accurate timing for my sequencer, how do you suggest I achieve this as someone who is a total novice to Core Audio and Objective-C, but otherwise has a 95% finished audio app?
If your sequencer is Swift code that depends on being called just-in-time to push audio, it won't work with good timing accuracy; in other words, you can't get the timing you need that way.
Core Audio uses a real-time pull model (which excludes Swift code of any interesting complexity). AVFoundation likely requires you to create your audio ahead of time and schedule buffers. An iOS app needs to be designed nearly from the ground up for one of these two solutions.
Added: If your existing code can generate audio samples a bit ahead of time, enough to statistically cover a jittery OS timer, you can schedule this pre-generated output to be played a few milliseconds later (i.e. pulled at the correct sample time).
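As a rough sketch of that idea (assuming an already-running AVAudioEngine with an attached, playing AVAudioPlayerNode), a pre-generated buffer can be scheduled at an exact future sample time:

```swift
import AVFoundation

// Sketch: schedule pre-generated samples slightly in the future so the
// render thread pulls them at an exact sample time.
func schedule(_ buffer: AVAudioPCMBuffer,
              on player: AVAudioPlayerNode,
              afterSeconds delay: Double) {
    guard let nodeTime = player.lastRenderTime,
          let playerTime = player.playerTime(forNodeTime: nodeTime) else {
        return // node is not rendering yet
    }
    let sampleRate = playerTime.sampleRate
    let targetSample = playerTime.sampleTime
        + AVAudioFramePosition(delay * sampleRate)
    let when = AVAudioTime(sampleTime: targetSample, atRate: sampleRate)
    player.scheduleBuffer(buffer, at: when, options: [], completionHandler: nil)
}
```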
AudioKit is an open source audio framework that provides Swift access to Core Audio services. It includes a Core Audio based sequencer, and there is plenty of sample code available in the form of Swift Playgrounds.
The AudioKit AKSequencer class has the transport controls you need. You can add MIDI events to your sequencer instance programmatically, or read them from a file. You could then connect your sequencer to an AKCallbackInstrument which can execute code upon receiving MIDI noteOn and noteOff commands, which might be one way to trigger your generated audio.
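A sketch of that wiring, using AudioKit 4-era names (AKSequencer, AKCallbackInstrument); exact signatures vary between AudioKit versions, so treat this as illustrative rather than definitive:

```swift
import AudioKit

// Illustrative sketch with AudioKit 4-era APIs; names and signatures
// differ between AudioKit versions.
let sequencer = AKSequencer()
let callbackInst = AKCallbackInstrument()

// The callback receives raw MIDI bytes: status, note number, velocity.
callbackInst.callback = { status, note, velocity in
    if status == 0x90, velocity > 0 {   // 0x90 = noteOn (channel 1)
        // trigger the matching libpd synth voice for `note` here
    } else {
        // stop the voice
    }
}

if let track = sequencer.newTrack() {
    track.setMIDIOutput(callbackInst.midiIn)
    track.add(noteNumber: 60, velocity: 100,
              position: AKDuration(beats: 0),
              duration: AKDuration(beats: 1))
}
sequencer.setTempo(120)
sequencer.play()   // transport controls: play(), stop(), rewind()
// Depending on the AudioKit version, you may also need to set
// AudioKit.output and start the AudioKit engine before playing.
```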

iPhone: Advanced Microphone Recorder APIs

I am building an app that allows our customers to record, save, and play back recorded sound as its basic functionality. This should be straightforward using the AVFoundation framework. I also want to allow users:
Fast-forward and reverse functionality.
The ability to manipulate the sound, i.e. insert additional audio into the middle of their recording later.
Could anyone please tell me how I could achieve this? Is there any good open-source library for this?
AVAudioPlayer supports manipulating the playback speed via its rate and enableRate properties, but it only allows forward playback.
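A small sketch of rate control (the file name "recording.m4a" is a placeholder):

```swift
import AVFoundation

// Sketch: playback-speed control with AVAudioPlayer (forward only).
let url = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("recording.m4a")

do {
    let player = try AVAudioPlayer(contentsOf: url)
    player.enableRate = true  // must be enabled before play()
    player.rate = 2.0         // fast-forward at 2x; 0.5 = half speed
    player.play()             // keep a strong reference while playing
} catch {
    print("Could not load recording: \(error)")
}
```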
The MPMoviePlayerController conforms to the MPMediaPlayback protocol, which allows you to specify any rate (even reverse), though this will result in choppy audio at some rates.
As far as merging audio files, I think your best bet is to convert your samples to linear PCM. Then you can insert additional samples anywhere in your stream.
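As an illustrative sketch of that approach, both files can be decoded through AVAudioFile (which hands you linear PCM) and re-written with the second clip spliced in at a frame offset. This assumes both inputs share a sample rate and channel count; all names and paths are placeholders:

```swift
import AVFoundation

// Sketch: decode two files to linear PCM and write a new file with
// `insertURL`'s audio spliced in at `insertFrame`.
func insertAudio(from insertURL: URL, into baseURL: URL,
                 atFrame insertFrame: AVAudioFramePosition,
                 writingTo outputURL: URL) throws {
    let base = try AVAudioFile(forReading: baseURL)
    let clip = try AVAudioFile(forReading: insertURL)
    let format = base.processingFormat          // deinterleaved float PCM
    let output = try AVAudioFile(forWriting: outputURL,
                                 settings: format.settings)

    // Copy `frames` frames from `file`, starting at its current read position.
    func copy(_ file: AVAudioFile, frames: AVAudioFrameCount) throws {
        var remaining = frames
        while remaining > 0 {
            let chunk = min(remaining, 4096)
            guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                                frameCapacity: chunk) else { return }
            try file.read(into: buffer, frameCount: chunk)
            if buffer.frameLength == 0 { break }  // end of file
            try output.write(from: buffer)
            remaining -= buffer.frameLength
        }
    }

    try copy(base, frames: AVAudioFrameCount(insertFrame))               // head
    try copy(clip, frames: AVAudioFrameCount(clip.length))               // insert
    try copy(base, frames: AVAudioFrameCount(base.length - insertFrame)) // tail
}
```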

How to change a recorded voice to a man's voice in Core Audio (Audio Unit/ Remote IO) for iPhone

I am new to Core Audio and really lost. I am trying to record audio and then apply voice modulation to that recording and play it back. I have looked at the SpeakHere example, which uses Audio Queue for audio recording. I am stuck at the part of how to change the audio samples. I understand that it can be done using an Audio Unit in the callback function, but I have no idea what to apply to those samples to change them (will changing pitch help?).
If you could direct me to some source code, a tutorial, or any site that explains voice modulation for Objective-C, it would really, really help me. Thank you all in advance.
What you are trying to do here is not that simple. Basically, you would have to implement a vocoder ("voice coder") to change a voice. The Wikipedia article on vocoders should help you get started.
Then you still have to manipulate those samples in Core Audio. You can do this using Audio Queue Services, but that is not exactly an easy-to-use API. It might actually be less trouble to use one of the simpler Core Audio APIs and wrap your vocoder in an Audio Unit.
Do you have some experience with audio processing? Implementing a vocoder without some prior knowledge about audio processing in general is a tough task.
First, to actually answer your question: when you called the AudioQueueNewInput() function, you passed it the name of a routine that will be called every time data is available to you. You probably called it MyInputBufferHandler() or something. Its third argument is an AudioQueueBufferRef which holds the incoming data.
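In Swift, the shape of that handler looks roughly like this (the sample processing itself is up to you):

```swift
import AudioToolbox

// The input callback passed to AudioQueueNewInput(), sketched in Swift.
// It must be a C-convention function, so it cannot capture context;
// any state has to come in through the user-data pointer.
let inputCallback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
    // buffer.pointee.mAudioData points at the raw incoming samples;
    // buffer.pointee.mAudioDataByteSize is their length in bytes.
    // Process the samples here, then hand the buffer back to the
    // queue so recording can continue into it.
    let status = AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
    assert(status == noErr, "re-enqueue failed")
}
```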
Be aware that this is not as simple as looking at each sample (amplitude) and lowering or raising it. You receive samples in the temporal (time) domain as amplitudes; there is no pitch or frequency information available. What you need to do is move the incoming samples (the waveform) into the frequency domain, wherein each "point" in that space is a frequency with its accompanying power and phase. You can do that with an FFT (fast Fourier transform), but the mathematics are somewhat sophisticated. Apple does provide FFT routines in the Accelerate framework, but be aware that you are wading into very deep water here.
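For a taste of what the domain conversion looks like, here is a minimal sketch using Accelerate's vDSP.DFT (iOS 13+). Real voice or pitch modification additionally needs windowing, overlap-add, and phase handling on top of this:

```swift
import Foundation
import Accelerate

// Minimal time-to-frequency domain sketch with vDSP.DFT.
let n = 1024
guard let forward = vDSP.DFT(count: n,
                             direction: .forward,
                             transformType: .complexComplex,
                             ofType: Float.self) else {
    fatalError("unsupported DFT size")
}

let realIn = [Float](repeating: 0, count: n)   // your time-domain samples
let imagIn = [Float](repeating: 0, count: n)   // zero for a real signal
var realOut = [Float](repeating: 0, count: n)
var imagOut = [Float](repeating: 0, count: n)

forward.transform(inputReal: realIn, inputImaginary: imagIn,
                  outputReal: &realOut, outputImaginary: &imagOut)

// Bin k now holds the complex amplitude at frequency k * sampleRate / n.
let magnitudes = zip(realOut, imagOut).map { sqrt($0 * $0 + $1 * $1) }
```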

OpenAL tone generation on iPhone

So I have been looking around for some time now for a way to produce a variable tone on the iPhone using OpenAL, the issue being that Apple has deprecated the ALUT part of OpenAL that has the alutCreateBufferWaveform that would be perfect for this. I was wondering if anyone had any idea how to make a tone generator using OpenAL on the iPhone SDK. All I need is the ability to produce a certain frequency tone consistently over and over again.
This is a last resort so sorry if it sounds kind of stupid.
This isn't exactly what you are looking for, but it can create a similar effect.
I used this tutorial
http://benbritten.com/2008/11/06/openal-sound-on-the-iphone/
to create an engine that could play a prerecorded sound at different levels. So even though I have to play the sound from an existing *.caf file, I can modulate the pitch and control looping so it produces any frequency, length, or volume I'm looking for.
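For reference, the pitch, looping, and volume modulation described above comes down to a few per-source OpenAL calls. This sketch assumes `source` is a source you already created and bound to a buffer loaded from your .caf file, as in the tutorial (and note that OpenAL itself has since been deprecated on iOS in favor of AVAudioEngine):

```swift
import OpenAL

// Sketch: per-source pitch, looping, and gain on an existing source.
func playLoopingTone(source: ALuint, pitch: Float) {
    alSourcef(source, AL_PITCH, pitch)      // 1.0 = original, 2.0 = octave up
    alSourcei(source, AL_LOOPING, AL_TRUE)  // repeat indefinitely
    alSourcef(source, AL_GAIN, 0.8)         // volume
    alSourcePlay(source)
}
```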