How to generate a Sine Wave on top of a Triangular Wave using the DAC with DMA on an STM32G4

I have an STM32G4 Nucleo board. I would like to generate a summed waveform consisting of a triangular wave (~1 Hz) and a sine wave (500 Hz) using the DAC and DMA on the STM32G4.
Is it possible to get the summed waveform out of one DAC channel? Can anyone help me with this? Any help is appreciated. Thanks.
I computed a lookup table for one cycle of a sine wave and added it onto an incrementing line. Then I realized this only generates a triangle wave with a single cycle of the sine wave on the rising ramp and a single cycle on the falling ramp.
#define DAC_BUF_LEN 200

float dac_sin[DAC_BUF_LEN];
float dac_triangular[DAC_BUF_LEN];
float dac[DAC_BUF_LEN];

// generate one cycle of sine wave
for (uint32_t i = 0; i < DAC_BUF_LEN; i++)
{
    float s = (float)i / (float)(DAC_BUF_LEN - 1);
    dac_sin[i] = sine_amplitude * sin(2 * M_PI * s);
}
// generate triangular wave (ramp up over the first half)
for (uint32_t i = 0; i < DAC_BUF_LEN / 2; i++)
{
    dac_triangular[i] = 0.006 * i - 0.5;
}
// generate triangular wave (ramp down over the second half)
for (uint32_t i = 0; i < DAC_BUF_LEN / 2; i++)
{
    dac_triangular[DAC_BUF_LEN / 2 + i] = -0.006 * i + 0.1;
}
// sum the two waves
for (uint32_t i = 0; i < DAC_BUF_LEN; i++)
{
    dac[i] = dac_sin[i] + dac_triangular[i];
}
// start the DMA only after the buffer has been filled
// note: the DAC expects unsigned 12-bit codes, so these float values
// would still need an offset/scale conversion to integers before this call
HAL_DAC_Start_DMA(&hdac1, DAC_CHANNEL_2, (uint32_t *)dac, DAC_BUF_LEN, DAC_ALIGN_12B_R);

It sounds to me like you want the DAC/DMA peripherals to do the math for you automatically. IMHO that is simply not possible and the wrong approach.
The correct approach would be:
Calculate the sine wave, calculate the triangular wave, add both values (for each sample), convert the result into the corresponding integer value, and store it in the DMA buffer. The DAC will then output voltages corresponding to the superposition of the two signals you generated.
If you want to fill the DMA buffer blockwise, do the same, but in a loop.

Frequency Adjusting with STM32 DAC

I used an STM32F407VG to create a 30 kHz sine wave. Timer settings are: Prescaler = 2-1, ARR = 1; the clock which drives the DAC runs at 84 MHz.
I wrote a function called generate_sine():
#define SINE_ARY_SIZE (360)
const int MAX_SINE_DEGERI = 4095; // max_sine_value
const double BASLANGIC_NOKTASI = 2047.5; //starting point
uint32_t sine_ary[SINE_ARY_SIZE];
void generate_sine(){
    for (int i = 0; i < SINE_ARY_SIZE; i++){
        double deger = (sin(i*M_PI*360/180/SINE_ARY_SIZE) * BASLANGIC_NOKTASI) + BASLANGIC_NOKTASI; //double value
        sine_ary[i] = (uint32_t)deger; // value
    }
}
This is the function which creates the sine wave. I used HAL DMA to send the DAC output values.
HAL_TIM_Base_Start(&htim2);
generate_sine();
HAL_DAC_Start_DMA(&hdac, DAC_CHANNEL_1, sine_ary, SINE_ARY_SIZE, DAC_ALIGN_12B_R);
This is the code I used to do what I want, but I'm having trouble changing the frequency without changing the prescaler or ARR.
So here is my question: can I change the frequency without changing the timer settings? For example, I want to use buttons, and whenever I push a button I want the frequency to change.
The generate_sine function gives you one period of a sine wave with SINE_ARY_SIZE samples.
To increase the frequency you need to make the period shorter (for 2x the frequency, you would have half the number of samples per period). So you should calculate the array for a smaller SINE_ARY_SIZE (which fills just part of the original buffer with a shorter sine wave) and also pass this smaller value to HAL_DAC_Start_DMA.
Decreasing the frequency requires making the array longer.
You should declare the sine_ary with a maximum length that you will need (for lowest frequency). Make sure it fits in RAM.
#define MAXIMUM_ARRAY_LENGTH 360
uint32_t usedArrayLength = 180;
const double amplitude = 2047.5;
uint32_t sine_ary[MAXIMUM_ARRAY_LENGTH];
void generate_sine(){
    for (int i = 0; i < usedArrayLength; i++){
        double value = (sin(i*M_PI*2/usedArrayLength) * amplitude) + amplitude;
        sine_ary[i] = (uint32_t)value; // value
    }
}
This will have twice the frequency of the original code, because it has only 180 samples per period, compared to 360.
Start it using
HAL_DAC_Start_DMA(&hdac, DAC_CHANNEL_1, sine_ary, usedArrayLength, DAC_ALIGN_12B_R);
To change the frequency, stop the DAC, change the value of usedArrayLength (a smaller value means a higher frequency; it must be less than or equal to MAXIMUM_ARRAY_LENGTH), call generate_sine again, and restart the DAC with the same HAL_DAC_Start_DMA call (which now uses the new usedArrayLength).
The frequency will be: Clock / prescaler / ARR / usedArrayLength.
Also, you should use uint16_t for the array (the values range from 0 to 4095, since the DAC is 12-bit, I suppose) and the DMA should be set to Half-word (2 bytes per value).
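That frequency relationship can be captured in a small helper. A sketch (the function name is my own; I take PSC and ARR as the raw register values, so the hardware divides by PSC+1 and ARR+1 — adjust if you quote the settings differently):

```c
#include <stdint.h>

/* Output frequency of the DMA-driven sine:
 * one timer update per sample, one full sine per used_array_length samples.
 * PSC and ARR are raw register values, so the timer divides the input
 * clock by (PSC + 1) * (ARR + 1). */
double dac_output_freq(double timer_clk_hz, uint32_t psc, uint32_t arr,
                       uint32_t used_array_length)
{
    double update_rate = timer_clk_hz
                       / (double)(psc + 1)
                       / (double)(arr + 1);
    return update_rate / (double)used_array_length;
}
```

For example, halving used_array_length doubles the output frequency while the timer settings stay untouched, which is exactly the button-press use case from the question.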

How to Make and Play A Procedurally Generated Chirp Sound

My goal is to create a "Chirper" class. A chirper should be able to emit a procedurally generated chirp sound. The specific idea is that the chirp must be procedurally generated, not a prerecorded sound played back.
What is the simplest way to achieve a procedurally generated chirp sound on the iPhone?
You can do it with a sine wave as you said, generated with the sin function. Create a buffer as long as you want the sound, in samples, such as:
// 1 second chirp
float samples[44100];
Then pick a start frequency and an end frequency; you probably want the start to be higher than the end, something like:
float startFreq = 1400;
float endFreq = 1100;
float thisFreq;
float phase = 0;
int x;
for (x = 0; x < 44100; x++)
{
    float lerp = (float)x / 44100.0f;
    thisFreq = (lerp * endFreq) + ((1 - lerp) * startFreq);
    phase += thisFreq / 44100.0f;      // accumulate phase so the sweep stays continuous
    samples[x] = sinf(2 * M_PI * phase);
}
Something like that, anyway.
And if you want a buzz or another sound, use different waveforms: create them to work very similarly to sin so you can use them interchangeably. That way you could create saw(), sqr(), and tri(), and you could combine them to form more complex or varied sounds.
Edit:
If you want to play it back, you should be able to do something along these lines using OpenAL. The important thing is to use OpenAL or a similar iOS API to play the raw buffer.
alGenBuffers (1, &buffer);
alBufferData (buffer, AL_FORMAT_MONO16, buf, size, 8000);
alGenSources (1, &source);
ALint state;
// attach buffer and play
alSourcei (source, AL_BUFFER, buffer);
alSourcePlay (source);
do
{
    wait (200);                          // placeholder: sleep ~200 ms between polls
    alGetSourcei (source, AL_SOURCE_STATE, &state);
}
while ((state == AL_PLAYING) && play);   // 'play' is an external stop flag
alSourceStop (source);
alDeleteSources (1, &source);
free (buf);                              // assuming buf was malloc'd earlier
Using the RemoteIO audio unit (see the Audio Unit Hosting Guide for iOS), you can run Nektarios's code in the render callback for the remote I/O unit. You can even change waveforms in real time (low latency).

Help with IIR Comb Filter

Reverb.m
#define D 1000
OSStatus MusicPlayerCallback(
    void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData){
    MusicPlaybackState *musicPlaybackState = (MusicPlaybackState *)inRefCon;
    //Sample rate 44.1 kHz
    float a0, a1;
    double y0, sampleinp;
    //Delay gain
    a0 = 1;
    a1 = 0.5;
    for (int i = 0; i < ioData->mNumberBuffers; i++){
        AudioBuffer buffer = ioData->mBuffers[i];
        SInt16 *outSampleBuffer = buffer.mData;
        for (int j = 0; j < inNumberFrames*2; j++) {
            //Delay left channel
            sampleinp = *musicPlaybackState->samplePtr++;
            /* IIR equation of comb filter
               y[n] = (a*x[n]) + (b*x[n-D])
            */
            y0 = (a0*sampleinp) + (a1*sampleinp-D);
            outSampleBuffer[j] = fmax(fmin(y0, 32767.0), -32768.0);
            j++;
            //Delay right channel
            sampleinp = *musicPlaybackState->samplePtr++;
            y0 = (a0*sampleinp) + (a1*sampleinp-D);
            outSampleBuffer[j] = fmax(fmin(y0, 32767.0), -32768.0);
        }
    }
    return noErr;
}
OK, I've got a lot of info, but I'm having trouble implementing it. Can someone help? It's probably something really easy I'm forgetting. The audio just plays back as normal with a little boost, but no delays.
Your treatment of the x0[] variables doesn't look right -- the way you have it, the left and right channels will be intermingled. You assign to x0[j] for the left channel, then overwrite x0[j] with the right channel data. So the delayed signal x0[j-D] will always correspond to the right channel, with the delayed left channel data being lost.
You didn't say what your sample rate is, but for a typical audio application, a three-sample delay might not have much of an audible effect. At 44.1 ksamp/sec, with a 3-sample delay the peaks and troughs of the filter response will be at multiples of 14,700 Hz. All you'll get is a single peak in the audio frequency range, in a part of the spectrum where there's hardly any power (assuming the signal is speech or music).
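For reference, a feedforward comb filter needs an actual D-sample delay line holding past inputs; `a1*sampleinp - D` only subtracts a constant. A standalone sketch, not tied to the Core Audio callback (the struct and names are my own):

```c
#include <string.h>

#define COMB_D 1000                     /* delay in samples */

typedef struct {
    double delay[COMB_D];               /* circular buffer of past inputs */
    int    pos;                         /* current write position */
    double a0, a1;                      /* direct and delayed gains */
} CombFilter;

void comb_init(CombFilter *c, double a0, double a1)
{
    memset(c, 0, sizeof(*c));
    c->a0 = a0;
    c->a1 = a1;
}

/* Feedforward comb: y[n] = a0 * x[n] + a1 * x[n - D] */
double comb_process(CombFilter *c, double x)
{
    double delayed = c->delay[c->pos];  /* this slot holds x[n - D] */
    double y = c->a0 * x + c->a1 * delayed;
    c->delay[c->pos] = x;               /* store x[n]; read back D samples later */
    c->pos = (c->pos + 1) % COMB_D;
    return y;
}
```

For stereo, keep two independent CombFilter states (one per channel) so the delayed left and right samples never intermingle, which is exactly the bug described above.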

How do I set up a buffer when doing an FFT using the Accelerate framework?

I'm using the Accelerate framework to perform a Fast Fourier Transform (FFT), and am trying to find a way to create a buffer for use with it that has a length of 1024. I have access to the average peak and peak of a signal on which I want to do the FFT.
Can somebody help me or give me some hints to do this?
Apple has some examples of how to set up FFTs in their vDSP Programming Guide. You should also check out the vDSP Examples sample application. While written for the Mac, this code should translate directly to iOS as well.
I recently needed to do a simple FFT of a 64-integer input waveform, for which I used the following code:
static FFTSetupD fft_weights;
static DSPDoubleSplitComplex input;
static double *magnitudes;
+ (void)initialize
{
/* Setup weights (twiddle factors) */
fft_weights = vDSP_create_fftsetupD(6, kFFTRadix2);
/* Allocate memory to store split-complex input and output data */
input.realp = (double *)malloc(64 * sizeof(double));
input.imagp = (double *)malloc(64 * sizeof(double));
magnitudes = (double *)malloc(64 * sizeof(double));
}
- (CGFloat)performAcceleratedFastFourierTransformAndReturnMaximumAmplitudeForArray:(NSUInteger *)waveformArray
{
for (NSUInteger currentInputSampleIndex = 0; currentInputSampleIndex < 64; currentInputSampleIndex++)
{
input.realp[currentInputSampleIndex] = (double)waveformArray[currentInputSampleIndex];
input.imagp[currentInputSampleIndex] = 0.0f;
}
/* 1D in-place complex FFT */
vDSP_fft_zipD(fft_weights, &input, 1, 6, FFT_FORWARD);
/* Zero out the DC bin so it doesn't dominate the maximum */
input.realp[0] = 0.0;
input.imagp[0] = 0.0;
// Get magnitudes
vDSP_zvmagsD(&input, 1, magnitudes, 1, 64);
// Extract the maximum value and its index
double fftMax = 0.0;
vDSP_maxmgvD(magnitudes, 1, &fftMax, 64);
return sqrt(fftMax);
}
As you can see, I only used the real values in this FFT to set up the input buffers, performed the FFT, and then read out the magnitudes.

How can I display a simple animated spectrogram to visualize audio from a MixerHostAudio object?

I'm working off of some of Apple's sample code for mixing audio (http://developer.apple.com/library/ios/#samplecode/MixerHost/Introduction/Intro.html) and I'd like to display an animated spectrogram (think of the iTunes spectrogram in the top center that replaces the song title with moving bars). It would need to somehow get data from the audio stream live, since the user will be mixing several loops together. I can't seem to find any tutorials online about anything to do with this.
I know I am really late to this question, but I just found a great resource that solves it.
Solution:
Instantiate an audio unit to record samples from the microphone of the iOS device.
Perform FFT computations with the vDSP functions in Apple’s Accelerate framework.
Draw your results to the screen using a UIImage
Computing the FFT or spectrogram of an audio signal is a fundamental audio signal processing task. As an iOS developer, whether you want to simply find the pitch of a sound, create a nice visualisation, or do some front-end processing, it's something you've likely thought about if you are at all interested in audio. Apple does provide a sample application illustrating this task (aurioTouch). The audio processing part of that app is obscured, however, imho, by extensive use of OpenGL.
The goal of this project was to abstract as much of the audio DSP out of the sample app as possible and to render a visualisation using only UIKit. The resulting application running in the iOS 6.1 simulator is shown to the left. There are three components: a simple 'exit' button at the top, a gain slider at the bottom (allowing the user to adjust the scaling between the magnitude spectrum and the brightness of the colour displayed), and a UIImageView displaying power spectrum data in the middle that refreshes every three seconds. Note that the frequencies run from low to high beginning at the top of the image, so the top of the display is actually DC while the bottom is the Nyquist rate. The image shows the results of processing some speech.
This particular Spectrogram App records audio samples at a rate of 11,025 Hz in frames that are 256 points long. That’s about 0.0232 seconds per frame. The frames are windowed using a 256 point Hanning window and overlap by 1/2 of a frame.
Let’s examine some of the relevant parts of the code that may cause confusion. If you want to try to build this project yourself you can find the source files in the archive below.
First of all, look at the content of the PerformThru method. This is a callback for an audio unit; it's where we read the audio samples from a buffer into one of the arrays we have declared.
SInt8 *data_ptr = (SInt8 *)(ioData->mBuffers[0].mData);
for (i=0; i<inNumberFrames; i++) {
framea[readlas1] = data_ptr[2];
readlas1 += 1;
if (readlas1 >=33075) {
readlas1 = 0;
dispatch_async(dispatch_get_main_queue(), ^{
[THIS printframemethod];
});
}
data_ptr += 4;
}
Note that framea is a static array of length 33075. The variable readlas1 keeps track of how many samples have been read. When the counter hits 33075 (3 seconds at this sampling frequency) a call to another method printframemethod is triggered and the process restarts.
The spectrogram is calculated in printframemethod.
for (int b = 0; b < 33075; b++) {
originalReal[b]=((float)framea[b]) * (1.0 ); //+ (1.0 * hanningwindow[b]));
}
for (int mm = 0; mm < 250; mm++) {
for (int b = 0; b < 256; b++) {
tempReal[b]=((float)framea[b + (128 * mm)]) * (0.0 + 1.0 * hanningwindow[b]);
}
vDSP_ctoz((COMPLEX *) tempReal, 2, &A, 1, nOver2);
vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_FORWARD);
scale = (float) 1. / 128.;
vDSP_vsmul(A.realp, 1, &scale, A.realp, 1, nOver2);
vDSP_vsmul(A.imagp, 1, &scale, A.imagp, 1, nOver2);
for (int b = 0; b < nOver2; b++) {
B.realp[b] = (32.0 * sqrtf((A.realp[b] * A.realp[b]) + (A.imagp[b] * A.imagp[b])));
}
for (int k = 0; k < 127; k++) {
Bspecgram[mm][k]=gainSlider.value * logf(B.realp[k]);
if (Bspecgram[mm][k]<0) {
Bspecgram[mm][k]=0.0;
}
}
}
Note that in this method we first cast the signed integer samples to floats and store in the array originalReal. Then the FFT of each frame is computed by calling the vDSP functions. The two-dimensional array Bspecgram contains the actual magnitude values of the Short Time Fourier Transform. Look at the code to see how these magnitude values are converted to RGB pixel data.
Things to note:
To get this to build, just start a new single-view project, replace the delegate and view controller, and add the aurio_helper files. You need to link the Accelerate, AudioToolbox, UIKit, Foundation, and CoreGraphics frameworks. You also need PublicUtility; on my system it is located at /Developer/Extras/CoreAudio/PublicUtility. Wherever you find it, add that directory to your header search paths.
Get the code:
The delegate, view controller, and helper files are included in this zip archive.
Apple's aurioTouch example app (on developer.apple.com) has source code for drawing an animated frequency spectrum plot from recorded audio input. You could probably group FFT bins into frequency ranges for a coarser bar graph plot.