How to Make and Play a Procedurally Generated Chirp Sound - iPhone

My goal is to create a "Chirper" class. A chirper should be able to emit a procedurally generated chirp sound. The specific idea is that the chirp must be procedurally generated, not a prerecorded sound played back.
What is the simplest way to achieve a procedurally generated chirp sound on the iPhone?

You can do it with a sine wave as you said, which you would generate using the sin function. Create a buffer as long as you want the sound to be, in samples, such as:
// 1 second chirp
float samples[44100];
Then pick a start frequency and an end frequency; you probably want the start to be higher than the end, something like:
float startFreq = 1400;
float endFreq = 1100;
float thisFreq;
float phase = 0;
int x;
for (x = 0; x < 44100; x++)
{
    float lerp = (float)x / 44100.0f;
    thisFreq = (lerp * endFreq) + ((1.0f - lerp) * startFreq);
    phase += 2.0f * M_PI * thisFreq / 44100.0f; // advance the phase by one sample at the current frequency (Hz -> radians per sample)
    samples[x] = sin(phase);
}
Something like that, anyway.
And if you want a buzz or another sound, use different waveforms. Create them to work very similarly to sin so you can use them interchangeably; that way you could create saw(), sqr() and tri(), and you could combine them to form more complex or varied sounds, as sketched below.
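For illustration, here is a minimal sketch of what such phase-based waveform functions could look like (the names saw(), sqr() and tri() are just placeholders); each takes a phase in radians, like sin(), and returns a value in [-1, 1]:
#include <math.h>
// Sawtooth: ramps from -1 to 1 over each period of 2*pi.
float saw(float phase) {
    float t = fmodf(phase, 2.0f * M_PI) / (2.0f * M_PI);
    return 2.0f * t - 1.0f;
}
// Square: +1 for the first half of the period, -1 for the second half.
float sqr(float phase) {
    return (fmodf(phase, 2.0f * M_PI) < M_PI) ? 1.0f : -1.0f;
}
// Triangle: rises from -1 to 1, then falls back to -1.
float tri(float phase) {
    float t = fmodf(phase, 2.0f * M_PI) / (2.0f * M_PI);
    return (t < 0.5f) ? (4.0f * t - 1.0f) : (3.0f - 4.0f * t);
}
Because they all share the sin()-style interface, you can swap sin(phase) in the chirp loop for saw(phase), or mix them, e.g. 0.5f * (sin(phase) + sqr(phase)).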
========================
Edit -
If you want to play it, you should be able to do something along these lines using OpenAL. The important thing is to use OpenAL or a similar iOS API to play the raw buffer.
alGenBuffers(1, &buffer);
alBufferData(buffer, AL_FORMAT_MONO16, buf, size, 8000); // use your actual sample rate here (44100 for the chirp above)
alGenSources(1, &source);
ALint state;
// attach buffer and play
alSourcei(source, AL_BUFFER, buffer);
alSourcePlay(source);
do
{
    wait(200); // placeholder for whatever sleep/yield mechanism you use
    alGetSourcei(source, AL_SOURCE_STATE, &state);
}
while ((state == AL_PLAYING) && play);
alSourceStop(source);
alDeleteSources(1, &source);
free(buf); // assuming buf was allocated with malloc, e.g. by the conversion sketch below
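One detail the snippet above glosses over: AL_FORMAT_MONO16 expects signed 16-bit samples, so a float buffer like the chirp has to be converted first. A minimal sketch (the helper name floatToPCM16 is just illustrative; clipping is handled crudely and dithering is ignored):
#include <stdint.h>
#include <stdlib.h>
// Convert float samples in [-1, 1] to signed 16-bit PCM suitable for alBufferData().
int16_t *floatToPCM16(const float *in, int count) {
    int16_t *out = (int16_t *)malloc(count * sizeof(int16_t));
    for (int i = 0; i < count; i++) {
        float s = in[i];
        if (s > 1.0f)  s = 1.0f;   // clamp to avoid integer overflow
        if (s < -1.0f) s = -1.0f;
        out[i] = (int16_t)(s * 32767.0f);
    }
    return out;
}
// Usage sketch for the 1-second chirp above:
// int16_t *buf = floatToPCM16(samples, 44100);
// alBufferData(buffer, AL_FORMAT_MONO16, buf, 44100 * sizeof(int16_t), 44100);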

Using RemoteIO audio unit
Audio Unit Hosting Guide for iOS
You can use Nektarios's code in the render callback for the Remote I/O unit. That way you can even change waveforms in real time (low latency).
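For reference, a minimal sketch of what a Remote I/O render callback that synthesizes a sine wave might look like (the SineState struct, the mono Float32 stream format and the 44100 Hz rate are assumptions, not part of the original answer):
#include <AudioUnit/AudioUnit.h>
#include <math.h>
typedef struct {
    double phase;       // current oscillator phase in radians
    double frequency;   // output frequency in Hz
    double sampleRate;  // e.g. 44100.0
} SineState;
static OSStatus RenderSine(void *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp,
                           UInt32 inBusNumber,
                           UInt32 inNumberFrames,
                           AudioBufferList *ioData) {
    SineState *state = (SineState *)inRefCon;
    // Assumes the unit is configured for mono, non-interleaved Float32 output.
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;
    double increment = 2.0 * M_PI * state->frequency / state->sampleRate;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        out[i] = (Float32)sin(state->phase);
        state->phase += increment;
        if (state->phase > 2.0 * M_PI) state->phase -= 2.0 * M_PI;
    }
    return noErr;
}
Because the buffer is filled on every callback, changing state->frequency (or swapping sin for another waveform function) takes effect within one render cycle, which is what gives you the low latency.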

Related

How to generate Sine Wave on top of Triangular Wave using the DAC with DMA of STM32G4

I have an STM32G4 Nucleo board. I would like to generate a summation waveform consisting of a triangular wave (~1 Hz) and a sine wave (500 Hz) using the DAC and DMA on the STM32G4.
Is it possible to get the summation waveform out from one DAC channel? Can anyone help me with this? Any help is appreciated. Thanks.
I computed a lookup table for one cycle of a sine wave and added it onto an incrementing ramp. Then I realized this will only generate a triangle wave with a single cycle of the sine wave while it is ramping up and a single cycle while it is ramping down.
#define dac_buf_len 200
HAL_DAC_Start_DMA(&hdac1, DAC_CHANNEL_2, (uint32_t *) dac, dac_buf_len, DAC_ALIGN_12B_R);
//generate sine wave
for (uint32_t i = 0; i < dac_buf_len; i++)
{
    float s = (float) i / (float)(dac_buf_len - 1);
    dac_sin[i] = sine_amplitude * sin(2*M_PI*s); //one cycle of sine wave
}
//generate triangular wave (ramp up)
for (uint32_t i = 0; i < dac_buf_len/2; i++)
{
    dac_triangular[i] = 0.006*i - 0.5;
}
//generate triangular wave (ramp down)
for (uint32_t i = 0; i < dac_buf_len/2; i++)
{
    dac_triangular[100 + i] = -0.006*i + 0.1;
}
//sum two waves together
for (uint32_t i = 0; i < dac_buf_len; i++)
{
    dac[i] = dac_sin[i] + dac_triangular[i];
}
To me it sounds like you want the DAC peripheral / the DMA peripheral to do the math for you automatically. IMHO that is simply not possible and the wrong approach.
The correct approach would be:
Calculate the sine wave, calculate the triangular wave, add both values (for each sample), convert the sum into the corresponding integer value and store it in the DMA buffer, as sketched below. The DAC will then create the output voltages that correspond to a superposition of the two signals you generated.
If you want to fill the DMA buffer blockwise, do the same, but in a loop.
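A minimal sketch of that approach, assuming a 12-bit right-aligned DAC and the same 200-sample circular DMA buffer (the amplitudes and the sine/triangle cycle ratio are placeholders; for a true 500 Hz sine on a ~1 Hz triangle you would need roughly 500 sine cycles per triangle cycle, i.e. a much larger buffer or block-wise refilling):
#include <math.h>
#include <stdint.h>
#define DAC_BUF_LEN 200
uint32_t dac[DAC_BUF_LEN];   // buffer handed to HAL_DAC_Start_DMA() in circular mode
void fill_dac_buffer(void) {
    const float sine_amplitude = 0.25f;  // placeholder; keep the sum within [-1, 1]
    const float tri_amplitude  = 0.5f;
    const float sine_cycles    = 5.0f;   // sine cycles per buffer (one triangle cycle per buffer)
    for (uint32_t i = 0; i < DAC_BUF_LEN; i++) {
        float s    = (float)i / (float)DAC_BUF_LEN;                       // position in the buffer, [0, 1)
        float sine = sine_amplitude * sinf(2.0f * (float)M_PI * sine_cycles * s);
        float tri  = (s < 0.5f) ? (4.0f * s - 1.0f) : (3.0f - 4.0f * s);  // one triangle cycle, [-1, 1]
        float sum  = sine + tri_amplitude * tri;                          // per-sample superposition
        dac[i]     = (uint32_t)((sum + 1.0f) * 0.5f * 4095.0f);           // scale to a 12-bit DAC code
    }
}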

How to make oscillator-based kick drum sound exactly the same every time

I’m trying to create a kick drum sound that must sound exactly the same when looped at different tempi. The implementation below sounds exactly the same when repeated once every second, but it sounds to me like every other kick has a higher pitch when played every half second. It’s like there is a clipping sound or something.
var context = new AudioContext();
function playKick(when) {
    var oscillator = context.createOscillator();
    var gain = context.createGain();
    oscillator.connect(gain);
    gain.connect(context.destination);
    oscillator.frequency.setValueAtTime(150, when);
    gain.gain.setValueAtTime(1, when);
    oscillator.frequency.exponentialRampToValueAtTime(0.001, when + 0.5);
    gain.gain.exponentialRampToValueAtTime(0.001, when + 0.5);
    oscillator.start(when);
    oscillator.stop(when + 0.5);
}
for (var i = 0; i < 16; i++) {
    playKick(i * 0.5); // Sounds fine with multiplier set to 1
}
Here’s the same code on JSFiddle: https://jsfiddle.net/1kLn26p4/3/
Phase is not the issue; oscillator.start will begin the phase at 0. The problem is that you're starting the "when" parameter at zero; you should start it at context.currentTime.
for (var i = 0; i < 16; i++) {
    playKick(context.currentTime + i * 0.5); // Sounds fine with multiplier set to 1
}
The oscillator is set to start at the same time as the change from the default frequency of 440 Hz to 150 Hz. Sometimes this results in a glitch as the transition is momentarily audible.
The glitch can be prevented by setting the frequency of the oscillator node to 150 Hz at the time of creation. So add:
oscillator.frequency.value = 150;
If you want to make the glitch more obvious out of curiosity, try:
oscillator.frequency.value = 5000;
and you should be able to hear what is happening.
Updated fiddle.
EDIT
In addition the same problem is interacting with the timing of the ramp. You can further improve the sound by ensuring that the setValueAtTime event always occurs a short time after playback starts:
oscillator.frequency.setValueAtTime(3500, when + 0.001);
Again, not perfect at 3500 Hz, but it's an improvement, and I'm not sure you'll achieve sonic perfection with Web Audio. The best you can do is try to mask these glitches until implementations improve. At actual kick drum frequencies (e.g. the 150 Hz in your original Q.), I can't tell any difference between successive sounds. Hopefully that's good enough.
Revised fiddle.

Improving Accuracy of iPhone's Accelerometer in Counting Steps

I am currently using the following code to count the number of steps a user takes in my indoor navigation application. As I am holding the phone around my chest level with the screen facing upwards, it counts the number of steps I take pretty well. But common actions like a tap on the screen or panning through the map register step counts as well. This is very frustrating as the tracking of my movement within the floor plan will become highly inaccurate. Does anyone have any idea how I can improve the accuracy of tracking in this case? Any comments will be much appreciated! To have a better idea of what I'm trying to do, you guys can check out a similar Android application at http://www.youtube.com/watch?v=wMgIa44mJXY. Thanks!
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
    float xx = acceleration.x;
    float yy = acceleration.y;
    float zz = acceleration.z;
    float dot = (px * xx) + (py * yy) + (pz * zz);
    float a = ABS(sqrt(px * px + py * py + pz * pz));
    float b = ABS(sqrt(xx * xx + yy * yy + zz * zz));
    dot /= (a * b);
    if (dot <= 0.9989) {
        if (!isSleeping) {
            isSleeping = YES;
            [self performSelector:@selector(wakeUp) withObject:nil afterDelay:0.3];
            numSteps += 1;
        }
    }
    px = xx; py = yy; pz = zz;
}
The data from the accelerometer is basically a one-dimensional (time) non-uniform sampling of a three-dimensional vector signal. The best way to figure out how to count steps is to write an app that records and stores the samples over a certain period of time, then export the data to a mathematical application like Wolfram's Mathematica for analysis and visualization. Remember that the sampling is non-uniform; you may or may not want to transform it into a uniformly sampled digital signal.
Then you can try different signal processing algorithms to see what works best.
It's possible that, once you know the basic shape of a step in accelerometer data, you can recognize them by simple convolution.
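The convolution-with-a-template idea is one option; as an even simpler starting point, here is a minimal sketch of smoothing the acceleration magnitude and counting threshold crossings (it assumes the data has already been resampled to a uniform rate; the window length and threshold are placeholders to tune against your recorded data):
// Count steps in a uniformly sampled acceleration-magnitude signal by smoothing it
// with a moving average and counting upward crossings of a threshold.
int countSteps(const float *magnitude, int n, int window, float threshold) {
    int steps = 0;
    float prev = 0.0f;
    for (int i = window; i < n; i++) {
        float sum = 0.0f;
        for (int j = i - window; j < i; j++) sum += magnitude[j];   // simple moving average
        float smooth = sum / (float)window;
        if (prev <= threshold && smooth > threshold) steps++;       // one step per upward crossing
        prev = smooth;
    }
    return steps;
}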

Get Hz frequency from audio stream on iPhone

What would be the best way to get a frequency value in Hz from an audio stream (music) on iOS? What are the best and easiest frameworks provided by Apple to do that? Thanks in advance.
Here is some code I use to perform an FFT on iOS using the Accelerate framework, which makes it quite fast.
#include <Accelerate/Accelerate.h>
#include <stdlib.h>
#include <string.h>
//keep all internal stuff inside this struct
typedef struct FFTHelperRef {
    FFTSetup fftSetup;          // Accelerate opaque type that contains setup information for a given FFT transform.
    COMPLEX_SPLIT complexA;     // Accelerate type for split complex numbers
    Float32 *outFFTData;        // Your FFT output data
    Float32 *invertedCheckData; // This is to verify correctness of the output. Compare it with the input.
} FFTHelperRef;
//first - initialize your FFTHelperRef with this function (numberOfSamples should be a power of two).
FFTHelperRef * FFTHelperCreate(long numberOfSamples) {
    FFTHelperRef *helperRef = (FFTHelperRef*) malloc(sizeof(FFTHelperRef));
    vDSP_Length log2n = log2f(numberOfSamples);
    helperRef->fftSetup = vDSP_create_fftsetup(log2n, FFT_RADIX2);
    int nOver2 = numberOfSamples/2;
    helperRef->complexA.realp = (Float32*) malloc(nOver2*sizeof(Float32));
    helperRef->complexA.imagp = (Float32*) malloc(nOver2*sizeof(Float32));
    helperRef->outFFTData = (Float32*) malloc(nOver2*sizeof(Float32));
    memset(helperRef->outFFTData, 0, nOver2*sizeof(Float32));
    helperRef->invertedCheckData = (Float32*) malloc(numberOfSamples*sizeof(Float32));
    return helperRef;
}
//pass an initialized FFTHelperRef, the data and the data size here. Returns FFT data of numSamples/2 size.
Float32 * computeFFT(FFTHelperRef *fftHelperRef, Float32 *timeDomainData, long numSamples) {
    vDSP_Length log2n = log2f(numSamples);
    Float32 mFFTNormFactor = 1.0/(2*numSamples);
    //Convert the float array of real samples to the COMPLEX_SPLIT array A
    vDSP_ctoz((COMPLEX*)timeDomainData, 2, &(fftHelperRef->complexA), 1, numSamples/2);
    //Perform the FFT using fftSetup and A
    //Results are returned in A
    vDSP_fft_zrip(fftHelperRef->fftSetup, &(fftHelperRef->complexA), 1, log2n, FFT_FORWARD);
    //scale the FFT
    vDSP_vsmul(fftHelperRef->complexA.realp, 1, &mFFTNormFactor, fftHelperRef->complexA.realp, 1, numSamples/2);
    vDSP_vsmul(fftHelperRef->complexA.imagp, 1, &mFFTNormFactor, fftHelperRef->complexA.imagp, 1, numSamples/2);
    //squared magnitude of each bin
    vDSP_zvmags(&(fftHelperRef->complexA), 1, fftHelperRef->outFFTData, 1, numSamples/2);
    //to check everything, invert the FFT back to the time domain =============================
    vDSP_fft_zrip(fftHelperRef->fftSetup, &(fftHelperRef->complexA), 1, log2n, FFT_INVERSE);
    vDSP_ztoc(&(fftHelperRef->complexA), 1, (COMPLEX *) fftHelperRef->invertedCheckData, 2, numSamples/2);
    //=========================================================================================
    return fftHelperRef->outFFTData;
}
Use it like this:
Initialize it: FFTHelperCreate(TimeDomainDataLenght);
Pass Float32 time domain data, get frequency domain data on return: Float32 *fftData = computeFFT(fftHelper, buffer, frameSize);
Now you have an array where indexes correspond to frequencies and values are squared magnitudes (vDSP_zvmags returns the squared magnitude of each bin).
According to the Nyquist theorem, the maximum possible frequency in that array is half of your sample rate. That is, if your sample rate is 44100, the maximum frequency you can represent is 22050 Hz.
So go find that Nyquist max frequency for your sample rate: const Float32 NyquistMaxFreq = SAMPLE_RATE/2.0;
Finding Hz is easy: Float32 hz = ((Float32)someIndex / (Float32)fftDataSize) * NyquistMaxFreq;
(fftDataSize = frameSize/2.0)
This works for me. If I generate a specific frequency in Audacity and play it, this code detects the right one (the strongest one; you also need to find the max in fftData to do this, as in the sketch below).
(There's still a small mismatch of about 1-2%. I'm not sure why this happens; if someone can explain why, that would be much appreciated.)
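For completeness, here is a minimal sketch of that "find the max in fftData" step, using the same Nyquist-based conversion (the function name is just illustrative; Float32 comes from the Accelerate headers included above):
// Find the strongest bin in the FFT output and convert its index to Hz.
Float32 strongestFrequency(const Float32 *fftData, long fftDataSize, Float32 sampleRate) {
    long maxIndex = 0;
    for (long i = 1; i < fftDataSize; i++) {
        if (fftData[i] > fftData[maxIndex]) maxIndex = i;
    }
    Float32 nyquistMaxFreq = sampleRate / 2.0f;
    return ((Float32)maxIndex / (Float32)fftDataSize) * nyquistMaxFreq;
}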
EDIT:
That mismatch happens because pieces I use to FFT are too small. Using larger chunks of time domain data (16384 frames) solves the problem.
This questions explains it:
Unable to get correct frequency value on iphone
EDIT:
Here is the example project: https://github.com/krafter/DetectingAudioFrequency
Questions like this are asked a lot here on SO (I've answered a similar one here), so I wrote a little tutorial with code that you can use even in commercial and closed-source apps. This is not necessarily the BEST way, but it's a way that many people understand. You will have to modify it based on what you mean by "Hz average value of every short music segment": do you mean the fundamental pitch or the frequency centroid, for example?
You might want to use Apple's FFT in the accelerate framework as suggested by another answer.
Hope it helps.
http://blog.bjornroche.com/2012/07/frequency-detection-using-fft-aka-pitch.html
Apple does not provide a framework for frequency or pitch estimation. However, the iOS Accelerate framework does include routines for FFT and autocorrelation which can be used as components of more sophisticated frequency and pitch recognition or estimation algorithms.
There is no way that is both easy and best, except possibly for a single long continuous constant frequency pure sinusoidal tone in almost zero noise, where an interpolated magnitude peak of a long windowed FFT might be suitable. For voice and music, that simple method will very often not work at all. But a search for pitch detection or estimation methods will turn up lots of research papers on more suitable algorithms.
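As one illustration of what such an algorithm can look like, here is a minimal, naive autocorrelation-based pitch estimate in plain C (not using the Accelerate routines; the minFreq/maxFreq search bounds are assumptions you would set for your material):
// Naive autocorrelation pitch estimate: returns the frequency (in Hz) whose lag
// maximizes the autocorrelation, searched between minFreq and maxFreq.
float estimatePitch(const float *x, int n, float sampleRate, float minFreq, float maxFreq) {
    int minLag = (int)(sampleRate / maxFreq);
    int maxLag = (int)(sampleRate / minFreq);
    if (maxLag >= n) maxLag = n - 1;
    int bestLag = 0;
    float bestCorr = 0.0f;
    for (int lag = minLag; lag <= maxLag; lag++) {
        float corr = 0.0f;
        for (int i = 0; i + lag < n; i++) corr += x[i] * x[i + lag];
        if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
    }
    return (bestLag > 0) ? sampleRate / (float)bestLag : 0.0f;
}
Real material usually needs more than this (windowing, normalization, octave-error handling), which is what the research papers cover.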

How can I display a simple animated spectrogram to visualize audio from a MixerHostAudio object?

I'm working off of some of Apple's sample code for mixing audio (http://developer.apple.com/library/ios/#samplecode/MixerHost/Introduction/Intro.html) and I'd like to display an animated spectrogram (think of the iTunes spectrum display in the top center that replaces the song title with moving bars). It would need to get data from the audio stream live, since the user will be mixing several loops together. I can't seem to find any tutorials online about anything to do with this.
I know I am really late to this question, but I just found a great resource that solves it.
Solution:
Instantiate an audio unit to record samples from the microphone of the iOS device.
Perform FFT computations with the vDSP functions in Apple’s Accelerate framework.
Draw your results to the screen using a UIImage.
Computing the FFT or spectrogram of an audio signal is a fundamental audio signal processing task. As an iOS developer, whether you want to simply find the pitch of a sound, create a nice visualisation, or do some front-end processing, it's something you've likely thought about if you are at all interested in audio. Apple does provide a sample application illustrating this task (aurioTouch). The audio processing part of that app is, imho, obscured by its extensive use of OpenGL.
The goal of this project was to abstract as much of the audio DSP out of the sample app as possible and to render a visualisation using only UIKit. The resulting application runs in the iOS 6.1 simulator. There are three components: a simple 'exit' button at the top, a gain slider at the bottom (allowing the user to adjust the scaling between the magnitude spectrum and the brightness of the colour displayed), and a UIImageView displaying power spectrum data in the middle that refreshes every three seconds. Note that the frequencies run from low to high beginning at the top of the image, so the top of the display is actually DC while the bottom is the Nyquist rate. The image shows the results of processing some speech.
This particular Spectrogram App records audio samples at a rate of 11,025 Hz in frames that are 256 points long. That’s about 0.0232 seconds per frame. The frames are windowed using a 256 point Hanning window and overlap by 1/2 of a frame.
Let’s examine some of the relevant parts of the code that may cause confusion. If you want to try to build this project yourself you can find the source files in the archive below.
First of all, look at the content of the PerformThru method. This is a callback for an audio unit; it's where we read the audio samples from a buffer into one of the arrays we have declared.
SInt8 *data_ptr = (SInt8 *)(ioData->mBuffers[0].mData);
for (i = 0; i < inNumberFrames; i++) {
    framea[readlas1] = data_ptr[2];
    readlas1 += 1;
    if (readlas1 >= 33075) {
        readlas1 = 0;
        dispatch_async(dispatch_get_main_queue(), ^{
            [THIS printframemethod];
        });
    }
    data_ptr += 4;
}
Note that framea is a static array of length 33075. The variable readlas1 keeps track of how many samples have been read. When the counter hits 33075 (3 seconds at this sampling frequency) a call to another method printframemethod is triggered and the process restarts.
The spectrogram is calculated in printframemethod.
for (int b = 0; b < 33075; b++) {
    originalReal[b] = ((float)framea[b]) * (1.0); //+ (1.0 * hanningwindow[b]));
}
for (int mm = 0; mm < 250; mm++) {
    for (int b = 0; b < 256; b++) {
        tempReal[b] = ((float)framea[b + (128 * mm)]) * (0.0 + 1.0 * hanningwindow[b]);
    }
    vDSP_ctoz((COMPLEX *) tempReal, 2, &A, 1, nOver2);
    vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_FORWARD);
    scale = (float) 1. / 128.;
    vDSP_vsmul(A.realp, 1, &scale, A.realp, 1, nOver2);
    vDSP_vsmul(A.imagp, 1, &scale, A.imagp, 1, nOver2);
    for (int b = 0; b < nOver2; b++) {
        B.realp[b] = (32.0 * sqrtf((A.realp[b] * A.realp[b]) + (A.imagp[b] * A.imagp[b])));
    }
    for (int k = 0; k < 127; k++) {
        Bspecgram[mm][k] = gainSlider.value * logf(B.realp[k]);
        if (Bspecgram[mm][k] < 0) {
            Bspecgram[mm][k] = 0.0;
        }
    }
}
Note that in this method we first cast the signed integer samples to floats and store them in the array originalReal. Then the FFT of each frame is computed by calling the vDSP functions. The two-dimensional array Bspecgram contains the actual magnitude values of the Short-Time Fourier Transform. Look at the code to see how these magnitude values are converted to RGB pixel data.
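For context, a minimal sketch of that magnitude-to-pixel step, mapping one log-magnitude value from Bspecgram to a grayscale RGBA pixel (the scaling factor and clamping range are assumptions, not taken from the original code, which produces a purple display):
#include <stdint.h>
// Map one log-magnitude value into a grayscale RGBA pixel for the bitmap behind the UIImage.
void magnitudeToPixel(float logMagnitude, uint8_t *pixel /* 4 bytes: RGBA */) {
    float v = logMagnitude * 16.0f;          // assumed scaling into the 0..255 range
    if (v < 0.0f)   v = 0.0f;
    if (v > 255.0f) v = 255.0f;
    uint8_t level = (uint8_t)v;
    pixel[0] = level;   // R
    pixel[1] = level;   // G
    pixel[2] = level;   // B
    pixel[3] = 255;     // A (opaque)
}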
Things to note:
To get this to build just start a new single-view project and replace the delegate and view controller and add the aurio_helper files. You need to link the Accelerate, AudioToolbox, UIKit, Foundation, and CoreGraphics frameworks to build this. Also, you need PublicUtility. On my system, it is located at /Developer/Extras/CoreAudio/PublicUtility. Where you find it, add that directory to your header search paths.
Get the code:
The delegate, view controller, and helper files are included in the zip archive that accompanies the original blog post, "A Spectrogram App for iOS in purple".
Apple's aurioTouch example app (on developer.apple.com) has source code for drawing an animated frequency spectrum plot from recorded audio input. You could probably group FFT bins into frequency ranges for a coarser bar graph plot.
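A minimal sketch of that grouping step, averaging FFT magnitude bins into a handful of bands for a bar graph (the band count is arbitrary and nBins is assumed to be at least nBands):
// Average nBins FFT magnitudes into nBands bar heights for a coarse bar-graph display.
void groupBinsIntoBands(const float *magnitudes, int nBins, float *bands, int nBands) {
    int binsPerBand = nBins / nBands;
    for (int b = 0; b < nBands; b++) {
        float sum = 0.0f;
        for (int i = 0; i < binsPerBand; i++) sum += magnitudes[b * binsPerBand + i];
        bands[b] = sum / (float)binsPerBand;
    }
}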