Help with IIR Comb Filter - iPhone

Reverb.m

#define D 1000

OSStatus MusicPlayerCallback(
    void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData) {

    MusicPlaybackState *musicPlaybackState = (MusicPlaybackState *)inRefCon;
    // Sample rate: 44.1 kHz
    float a0, a1;
    double y0, sampleinp;
    // Delay gain
    a0 = 1;
    a1 = 0.5;
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        SInt16 *outSampleBuffer = buffer.mData;
        for (int j = 0; j < inNumberFrames * 2; j++) {
            // Delay left channel
            sampleinp = *musicPlaybackState->samplePtr++;
            /* IIR equation of comb filter:
               y[n] = (a * x[n]) + (b * x[n-D])
            */
            y0 = (a0 * sampleinp) + (a1 * sampleinp - D);
            outSampleBuffer[j] = fmax(fmin(y0, 32767.0), -32768.0);
            j++;
            // Delay right channel
            sampleinp = *musicPlaybackState->samplePtr++;
            y0 = (a0 * sampleinp) + (a1 * sampleinp - D);
            outSampleBuffer[j] = fmax(fmin(y0, 32767.0), -32768.0);
        }
    }
    return noErr;
}
OK, I got a lot of info, but I'm having trouble implementing it. Can someone help? It's probably something really easy I'm forgetting. It's just playing back as normal with a little boost, but no delays.

Your treatment of the x0[] variables doesn't look right -- the way you have it, the left and right channels will be intermingled. You assign to x0[j] for the left channel, then overwrite x0[j] with the right channel data. So the delayed signal x0[j-D] will always correspond to the right channel, with the delayed left channel data being lost.

You didn't say what your sample rate is, but for a typical audio application, a three-sample delay might not have much of an audible effect. At 44.1 ksamp/sec, with a 3-sample delay the peaks and troughs of the filter response will be at multiples of 14,700 Hz. All you'll get is a single peak in the audio frequency range, in a part of the spectrum where there's hardly any power (assuming the signal is speech or music).
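For reference, here is a minimal sketch of the per-channel delay-line approach, assuming D = 1000 and 16-bit interleaved stereo as in the posted code (the delayLeft/delayRight/combSample names are illustrative, not from the original post):

#define D 1000

static double delayLeft[D];   // left-channel delay line
static double delayRight[D];  // right-channel delay line
static int delayIndex = 0;    // circular position, advanced once per frame

static inline SInt16 combSample(double x, double *delayLine) {
    // y[n] = a0 * x[n] + a1 * x[n - D]
    double y = 1.0 * x + 0.5 * delayLine[delayIndex];
    delayLine[delayIndex] = x;  // remember this input for D frames from now
    return (SInt16)fmax(fmin(y, 32767.0), -32768.0);
}

// In the render loop, with j stepping over interleaved samples:
//   outSampleBuffer[j]     = combSample(*musicPlaybackState->samplePtr++, delayLeft);
//   outSampleBuffer[j + 1] = combSample(*musicPlaybackState->samplePtr++, delayRight);
//   delayIndex = (delayIndex + 1) % D;

Because each channel keeps its own D-sample circular buffer, the delayed left samples are never overwritten by the right channel.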

Related

Frequency Adjusting with STM32 DAC

I used an STM32F407VG to create a 30 kHz sine wave. Timer settings are: Prescaler = 2-1, ARR = 1; the clock that drives the DAC runs at 84 MHz.
I wrote a function called generate_sine():
#define SINE_ARY_SIZE (360)

const int MAX_SINE_DEGERI = 4095;        // max sine value
const double BASLANGIC_NOKTASI = 2047.5; // starting point (mid-scale offset)

uint32_t sine_ary[SINE_ARY_SIZE];

void generate_sine() {
    for (int i = 0; i < SINE_ARY_SIZE; i++) {
        double deger = (sin(i * M_PI * 360 / 180 / SINE_ARY_SIZE) * BASLANGIC_NOKTASI) + BASLANGIC_NOKTASI; // sample value
        sine_ary[i] = (uint32_t)deger;
    }
}
This is the function that creates the sine wave. I used HAL DMA to send the DAC output values:
HAL_TIM_Base_Start(&htim2);
generate_sine();
HAL_DAC_Start_DMA(&hdac, DAC_CHANNEL_1, sine_ary, SINE_ARY_SIZE, DAC_ALIGN_12B_R);
This is the code I used to do what I want, but I'm having trouble changing the frequency without changing the prescaler or ARR.
So here is my question: can I change the frequency without changing the timer settings? For example, I want to use buttons, and whenever I push a button I want the frequency to change.
The generate_sine function gives you one period of a sine wave with SINE_ARY_SIZE samples.
To increase the frequency you need to make the period shorter (for 2x the frequency, you would have half the number of samples per period). So you should calculate the array for a smaller SINE_ARY_SIZE (which will fill just part of the original buffer with a shorter sine wave) and also pass this smaller value to the HAL_DAC_Start_DMA function.
Decreasing the frequency will require making the array longer.
You should declare sine_ary with the maximum length you will need (for the lowest frequency). Make sure it fits in RAM.
#define MAXIMUM_ARRAY_LENGTH 360

uint32_t usedArrayLength = 180;
const double amplitude = 2047.5;

uint32_t sine_ary[MAXIMUM_ARRAY_LENGTH];

void generate_sine() {
    for (int i = 0; i < usedArrayLength; i++) {
        double value = (sin(i * M_PI * 2 / usedArrayLength) * amplitude) + amplitude;
        sine_ary[i] = (uint32_t)value;
    }
}
This will have twice the frequency of the original code, because it only has 180 samples per period compared to 360.
Start it using
HAL_DAC_Start_DMA(&hdac, DAC_CHANNEL_1, sine_ary, usedArrayLength, DAC_ALIGN_12B_R);
To change the frequency, stop the DAC, change the value of usedArrayLength (a smaller value means a higher frequency; it must be less than or equal to MAXIMUM_ARRAY_LENGTH), call generate_sine again, and restart the DAC with the same function (which now uses the new usedArrayLength), as in the sketch below.
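Put together, a frequency change could look like this (a sketch assuming the HAL handles above; set_sine_length is a made-up helper name):

void set_sine_length(uint32_t newLength) {
    if (newLength == 0 || newLength > MAXIMUM_ARRAY_LENGTH)
        return;                              // reject invalid lengths
    HAL_DAC_Stop_DMA(&hdac, DAC_CHANNEL_1);  // stop before rewriting the buffer
    usedArrayLength = newLength;
    generate_sine();                         // rebuild one period at the new length
    HAL_DAC_Start_DMA(&hdac, DAC_CHANNEL_1, sine_ary,
                      usedArrayLength, DAC_ALIGN_12B_R);
}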
Frequency will be: Clock/prescaler/ARR/usedArrayLength
Also, you should use uint16_t for the array (values are from 0 to 4095; the DAC is 12-bit, I suppose) and DMA should be set to half-word (2 bytes per value).

iOS Tone Generator with variable Oscillation Patterns

I have a tone generator application that generates a tone based on a slider value for frequency. This part of the application works fine. I'm rendering the tone using:
#import <AudioToolbox/AudioToolbox.h>
OSStatus RenderTone(
    void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData)
{
    // Fixed amplitude is good enough for our purposes
    const double amplitude = 0.25;
    // Get the tone parameters out of the view controller
    ToneGeneratorViewController *viewController =
        (ToneGeneratorViewController *)inRefCon;
    double theta = viewController->theta;
    double theta_increment = 2.0 * M_PI * viewController->frequency / viewController->sampleRate;
    // This is a mono tone generator so we only need the first buffer
    const int channel = 0;
    Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
    // Generate the samples
    for (UInt32 frame = 0; frame < inNumberFrames; frame++)
    {
        buffer[frame] = sin(theta) * amplitude;
        theta += theta_increment;
        if (theta > 2.0 * M_PI)
        {
            theta -= 2.0 * M_PI;
        }
    }
    // Store the theta back in the view controller
    viewController->theta = theta;
    return noErr;
}
- (void)createToneUnit
{
    // Configure the search parameters to find the default playback output unit
    // (called kAudioUnitSubType_RemoteIO on iOS but
    // kAudioUnitSubType_DefaultOutput on Mac OS X)
    AudioComponentDescription defaultOutputDescription;
    defaultOutputDescription.componentType = kAudioUnitType_Output;
    defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
    defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    defaultOutputDescription.componentFlags = 0;
    defaultOutputDescription.componentFlagsMask = 0;

    // Get the default playback output unit
    AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
    NSAssert(defaultOutput, @"Can't find default output");

    // Create a new unit based on this that we'll use for output
    OSErr err = AudioComponentInstanceNew(defaultOutput, &toneUnit);
    NSAssert1(toneUnit, @"Error creating unit: %ld", err);

    // Set our tone rendering function on the unit
    AURenderCallbackStruct input;
    input.inputProc = RenderTone;
    input.inputProcRefCon = self;
    err = AudioUnitSetProperty(toneUnit,
        kAudioUnitProperty_SetRenderCallback,
        kAudioUnitScope_Input,
        0,
        &input,
        sizeof(input));
    NSAssert1(err == noErr, @"Error setting callback: %ld", err);

    // Set the format to 32 bit, single channel, floating point, linear PCM
    const int four_bytes_per_float = 4;
    const int eight_bits_per_byte = 8;
    AudioStreamBasicDescription streamFormat;
    streamFormat.mSampleRate = sampleRate;
    streamFormat.mFormatID = kAudioFormatLinearPCM;
    streamFormat.mFormatFlags =
        kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
    streamFormat.mBytesPerPacket = four_bytes_per_float;
    streamFormat.mFramesPerPacket = 1;
    streamFormat.mBytesPerFrame = four_bytes_per_float;
    streamFormat.mChannelsPerFrame = 1;
    streamFormat.mBitsPerChannel = four_bytes_per_float * eight_bits_per_byte;
    err = AudioUnitSetProperty(toneUnit,
        kAudioUnitProperty_StreamFormat,
        kAudioUnitScope_Input,
        0,
        &streamFormat,
        sizeof(AudioStreamBasicDescription));
    NSAssert1(err == noErr, @"Error setting stream format: %ld", err);
}
Now I need to modify the wave patterns in the application, like the Dog Whistler application does. Can anyone tell me what I need to do to modify the wave patterns starting from this source code?
Thanks in advance.
You would probably need a different RenderTone implementation for each specific pattern. The implementation in your code produces a sampled pure sinusoidal wave with no modulation. There are various patterns you could generate; what you implement depends on your needs.
For example, generating shorter or longer beeps would require generating 'silence' (writing zeros to the buffer) in your for loop for a certain number of frames, then generating the sinusoidal samples again, then silence again... (this is like chopping the signal).
You could also apply amplitude modulation (a tremolo effect) by scaling the sample values by a factor computed from another sine signal with a much lower frequency.
Another example would be a 'police siren' sound produced by modulating the frequency of the generated samples (a vibrato effect), essentially the value of your variable theta_increment, again according to a low-frequency signal; or simply alternate between two values of it, as with the 'beep' effect above.
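As an illustration, the tremolo idea could be dropped into RenderTone's loop like this (a sketch assuming a new lfoTheta double field on the view controller, which is not in the original code):

// Hypothetical LFO state; lfoTheta would be persisted on the view
// controller across callbacks, just like theta.
double lfoTheta = viewController->lfoTheta;
double lfo_increment = 2.0 * M_PI * 5.0 / viewController->sampleRate; // 5 Hz LFO

for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
    // 0.5 * (1 + sin) maps the LFO into [0, 1], so the tone fades in and out
    double tremolo = 0.5 * (1.0 + sin(lfoTheta));
    buffer[frame] = sin(theta) * amplitude * tremolo;
    theta += theta_increment;
    lfoTheta += lfo_increment;
    if (theta > 2.0 * M_PI) theta -= 2.0 * M_PI;
    if (lfoTheta > 2.0 * M_PI) lfoTheta -= 2.0 * M_PI;
}
viewController->lfoTheta = lfoTheta;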
Hope this helps.

Perform autocorrelation with vDSP_conv from Apple Accelerate Framework

I need to perform the autocorrelation of an array (vector) but I am having trouble finding the correct way to do so. I believe that I need the method "vDSP_conv" from the Accelerate Framework, but I can't follow how to successfully set it up. The thing throwing me off the most is the need for 2 inputs. Perhaps I have the wrong function, but I couldn't find one that operated on a single vector.
The documentation can be found here
Copied from the site
vDSP_conv
Performs either correlation or convolution on two vectors; single precision.

void vDSP_conv(const float __vDSP_signal[], vDSP_Stride __vDSP_signalStride,
               const float __vDSP_filter[], vDSP_Stride __vDSP_strideFilter,
               float __vDSP_result[], vDSP_Stride __vDSP_strideResult,
               vDSP_Length __vDSP_lenResult, vDSP_Length __vDSP_lenFilter);
Parameters
__vDSP_signal
Input vector A. The length of this vector must be at least __vDSP_lenResult + __vDSP_lenFilter - 1.
__vDSP_signalStride
The stride through __vDSP_signal.
__vDSP_filter
Input vector B.
__vDSP_strideFilter
The stride through __vDSP_filter.
__vDSP_result
Output vector C.
__vDSP_strideResult
The stride through __vDSP_result.
__vDSP_lenResult
The length of __vDSP_result.
__vDSP_lenFilter
The length of __vDSP_filter.
For an example, just assume you have an array of floats, x = [1.0, 2.0, 3.0, 4.0, 5.0]. How would I take the autocorrelation of that?
The output should be something similar to float y = [5.0, 14.0, 26.0, 40.0, 55.0, 40.0, 26.0, 14.0, 5.0] //generated using Matlab's xcorr(x) function
Performing autocorrelation simply means taking the cross-correlation of a vector with itself; there is nothing fancy about it.
So in your case, do:
vDSP_conv(x, 1, x, 1, result, 1, 2*len_X-1, len_X);
For more details, check this sample code (which performs a convolution):
http://disanji.net/iOS_Doc/#documentation/Performance/Conceptual/vDSP_Programming_Guide/SampleCode/SampleCode.html
EDIT: This borders on ridiculous, but you need to offset the x values by a specific number of zeros, which is just crazy.
The following is working code; just set filter to the x values you want, and it will put the rest in the correct positions:
float *signal, *filter, *result;
int32_t signalStride, filterStride, resultStride;
uint32_t lenSignal, filterLength, resultLength;
uint32_t i;

filterLength = 5;
resultLength = filterLength * 2 - 1;
// Round the filter length up to a multiple of 4, then add the result length,
// so the signal meets the minimum length of lenResult + lenFilter - 1.
lenSignal = ((filterLength + 3) & 0xFFFFFFFC) + resultLength;
signalStride = filterStride = resultStride = 1;

printf("\nCorrelation ( resultLength = %d, "
       "filterLength = %d )\n\n", resultLength, filterLength);

/* Allocate memory for the operands; calloc zero-fills the signal so the
   padding around the copied filter values is actually zero. */
signal = (float *) calloc(lenSignal, sizeof(float));
filter = (float *) malloc(filterLength * sizeof(float));
result = (float *) malloc(resultLength * sizeof(float));

for (i = 0; i < filterLength; i++)
    filter[i] = (float)(i + 1);

// Copy the filter values into the signal, offset past the leading zeros.
for (i = 0; i < resultLength; i++)
    if (i >= resultLength - filterLength)
        signal[i] = filter[i - filterLength + 1];

/* Correlation. */
vDSP_conv(signal, signalStride, filter, filterStride,
          result, resultStride, resultLength, filterLength);

printf("signal: ");
for (i = 0; i < lenSignal; i++)
    printf("%2.1f ", signal[i]);
printf("\n filter: ");
for (i = 0; i < filterLength; i++)
    printf("%2.1f ", filter[i]);
printf("\n result: ");
for (i = 0; i < resultLength; i++)
    printf("%2.1f ", result[i]);

/* Free allocated memory. */
free(signal);
free(filter);
free(result);
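With the zero-filled padding in place, the result line prints 5.0 14.0 26.0 40.0 55.0 40.0 26.0 14.0 5.0, matching the xcorr(x) output quoted in the question.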

Passing AudioQueueBufferRef data to FFT function!

I am trying to compute the frequency of a sound captured through the microphone on the iPhone.
I've read all the posts about FFT (including all the Apple code examples, e.g. aurioTouch and SpeakHere), but no solution to this problem.
I'm using AudioQueue, but how do I pass the raw data (inBuffer->mAudioData from the AudioQueue callback function MyInputBufferHandler) into the FFT's DSPSplitComplex datatype so I can compute it? All this using the Accelerate framework.
// AudioQueue callback function, called when an input buffer has been filled.
void AQRecorder::MyInputBufferHandler(void *inUserData,
                                      AudioQueueRef inAQ,
                                      AudioQueueBufferRef inBuffer,
                                      const AudioTimeStamp *inStartTime,
                                      UInt32 inNumPackets,
                                      const AudioStreamPacketDescription *inPacketDesc)
{
    for (int i = 0; i < inNumPackets; i++) {
        printf("%d ", ((int *)inBuffer->mAudioData)[i]);
    }
}
The FFT function.
RealFFTUsageAndTiming() {
    COMPLEX_SPLIT A; // DSPSplitComplex datatype
    FFTSetup setupReal;
    uint32_t log2n;
    uint32_t n, nOver2;
    int32_t stride;
    uint32_t i;
    float *originalReal, *obtainedReal;
    float scale;

    /* Set the size of the FFT. */
    log2n = N;
    n = 1 << log2n;
    stride = 1;
    nOver2 = n / 2;

    /* Allocate memory for the input operands and check its availability,
     * use the vector version to get 16-byte alignment. */
    A.realp = (float *) malloc(nOver2 * sizeof(float));
    A.imagp = (float *) malloc(nOver2 * sizeof(float));
    originalReal = (float *) malloc(n * sizeof(float));
    obtainedReal = (float *) malloc(n * sizeof(float));

    // How do I pass the data from the AudioQueue callback to this function?
    vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_FORWARD);
    vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_INVERSE);
}
I haven't found anywhere how to do this. Please help!
You have to know the C data type of the data in the audio buffer and the data types that the FFT supports. If they are not the same (commonly 16-bit signed int versus single-precision float), then you will have to convert while unpacking and copying the arrays of PCM data (in a for loop). Given real data, you can zero out the imaginary array of the input to the FFT.
Also, the length of the Audio Queue buffer may not be the same as the FFT length, so you may have to save the data from the Audio Queue callback to another queue internal to your app, and have another worker thread pass that data to your analysis/FFT routines as the queue fills.
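For the conversion itself, something along these lines should work (a sketch assuming 16-bit signed mono PCM and the n/nOver2/A variables set up as in the question's code):

// Convert 16-bit PCM to float, then pack the real signal into split-complex
// form for vDSP_fft_zrip (even-indexed samples -> realp, odd -> imagp).
SInt16 *pcm = (SInt16 *)inBuffer->mAudioData;
float *monoFloat = (float *)malloc(n * sizeof(float));

vDSP_vflt16(pcm, 1, monoFloat, 1, n);                  // short -> float
vDSP_ctoz((DSPComplex *)monoFloat, 2, &A, 1, nOver2);  // pack into A

// A is now ready for vDSP_fft_zrip(setupReal, &A, 1, log2n, FFT_FORWARD);
free(monoFloat);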
Amplitude values are:
for (i = 0; i < nOver2; i++) {
    printf("%f\n", log10(A.realp[i]));
}
Print this after using vDSP_fft_zrip.

How do I set up a buffer when doing an FFT using the Accelerate framework?

I'm using the Accelerate framework to perform a Fast Fourier Transform (FFT), and am trying to find a way to create a buffer for use with it that has a length of 1024. I have access to the average peak and peak of a signal on which I want to do the FFT.
Can somebody help me or give me some hints to do this?
Apple has some examples of how to set up FFTs in their vDSP Programming Guide. You should also check out the vDSP Examples sample application. While written for the Mac, this code should translate directly across to iOS as well.
I recently needed to do a simple FFT of a 64-sample integer input waveform, for which I used the following code:
static FFTSetupD fft_weights;
static DSPDoubleSplitComplex input;
static double *magnitudes;

+ (void)initialize
{
    /* Setup weights (twiddle factors) */
    fft_weights = vDSP_create_fftsetupD(6, kFFTRadix2);
    /* Allocate memory to store split-complex input and output data */
    input.realp = (double *)malloc(64 * sizeof(double));
    input.imagp = (double *)malloc(64 * sizeof(double));
    magnitudes = (double *)malloc(64 * sizeof(double));
}

- (CGFloat)performAcceleratedFastFourierTransformAndReturnMaximumAmplitudeForArray:(NSUInteger *)waveformArray
{
    for (NSUInteger currentInputSampleIndex = 0; currentInputSampleIndex < 64; currentInputSampleIndex++)
    {
        input.realp[currentInputSampleIndex] = (double)waveformArray[currentInputSampleIndex];
        input.imagp[currentInputSampleIndex] = 0.0f;
    }

    /* 1D in-place complex FFT */
    vDSP_fft_zipD(fft_weights, &input, 1, 6, FFT_FORWARD);

    // Zero out the DC component so it doesn't dominate the maximum
    input.realp[0] = 0.0;
    input.imagp[0] = 0.0;

    // Get magnitudes
    vDSP_zvmagsD(&input, 1, magnitudes, 1, 64);

    // Extract the maximum value
    double fftMax = 0.0;
    vDSP_maxmgvD(magnitudes, 1, &fftMax, 64);

    return sqrt(fftMax);
}
As you can see, I only used the real values in this FFT to set up the input buffers, performed the FFT, and then read out the magnitudes.
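To match the 1024-sample buffer from the question, the same pattern scales up by using log2n = 10 (2^10 = 1024); a sketch, assuming the same static variables as above:

fft_weights = vDSP_create_fftsetupD(10, kFFTRadix2);   // 2^10 = 1024 points
input.realp = (double *)malloc(1024 * sizeof(double));
input.imagp = (double *)malloc(1024 * sizeof(double));
magnitudes  = (double *)malloc(1024 * sizeof(double));
// ...fill the 1024 input samples, then:
vDSP_fft_zipD(fft_weights, &input, 1, 10, FFT_FORWARD);
vDSP_zvmagsD(&input, 1, magnitudes, 1, 1024);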