I have a signal that may not be periodic. We take about 15 seconds' worth of samples at a 10 kHz sampling rate, and we need to run an FFT on that signal to get its frequency content.
The problem is that we are implementing this FFT on an embedded system (a DSP) whose library FFT has a maximum length of 1024. That is, it takes 1024 data points as input and produces a 1024-point output.
What is the correct way of obtaining an FFT on the full 150000 point input?
You could run the FFT on each 1024-point block and average the results to get an average power spectrum on the lower-resolution 1024-point frequency axis (512 bins from 0 to the Nyquist frequency, fs/2, so about 10 Hz resolution for your 10 kHz sampling). You should average the magnitudes of the component FFTs (i.e., sqrt(re^2+im^2)); otherwise the average will be sensitive to the drifting phase within each subwindow, which depends on the precise frequency of the sinusoid.
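A minimal MATLAB-style sketch of that block-and-average idea (on the DSP, the 1024-point library call would replace fft() below; the variable names are mine):
fs = 10e3;                % 10 kHz sampling rate
x  = x(:);                % your ~150000-sample record as a column vector
L  = 1024;                % library FFT length
nBlocks = floor(numel(x) / L);
avgMag  = zeros(L, 1);
for b = 1:nBlocks
    blk    = x((b-1)*L + (1:L));
    avgMag = avgMag + abs(fft(blk));    % magnitude, i.e. sqrt(re^2 + im^2)
end
avgMag = avgMag / nBlocks;
f = (0:L/2-1) * fs / L;   % about 9.77 Hz per bin, 0 to fs/2
plot(f, avgMag(1:L/2));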
If you think the periodic component may be at a low frequency, such that it shows up over a 15 sec record but does not complete a cycle in a 1024/10k ~ 100 ms block (i.e., it is below 10 Hz or so), you could downsample your input. You could try something as crude as averaging every 100 points to get a somewhat-distorted signal at a 100 Hz sampling rate, then pack 10.24 sec worth into the 1024-point sequence you pass to the FFT.
You could combine these two approaches by using a smaller downsampling factor and then do the magnitude-averaging of successive windows.
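Continuing the sketch above, the combined approach might look like this (the downsampling factor of 10 is just an example):
D   = 10;                                % modest downsampling factor
xd  = mean(reshape(x(1:D*floor(numel(x)/D)), D, []), 1).';   % crude average-of-D decimation
fsd = fs / D;                            % 1 kHz effective sampling rate
nBlocks = floor(numel(xd) / L);
avgMag  = zeros(L, 1);
for b = 1:nBlocks
    avgMag = avgMag + abs(fft(xd((b-1)*L + (1:L))));         % library FFT goes here
end
avgMag = avgMag / nBlocks;               % about 0.98 Hz per bin, 0 to fsd/2 = 500 Hz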
I'm confused why the system provides an FFT only up to 1024 points - is there something about the memory that makes it harder to access larger blocks?
Calculating a 128k point FFT using a 1k FFT as a subroutine is possible, but you'd end up recoding a lot of the FFT yourself. Maybe you should forget about the system library and use some other FFT implementation, without the length limitation, that will compile on your target. It may not incorporate all the optimizations of the system-provided one, but you're likely to lose a lot of that advantage when you embed it within the custom code needed to use the partial outputs of the multiple shorter FFTs to produce the long FFT.
Probably the quickest way to do the hybrid FFT (1024-point transforms from the library, plus added code to combine them into a 128k-point FFT) would be to take an existing full FFT routine (a radix-2, decimation-in-time (DIT) routine, for instance) and modify it to use the system library for what would have been the first 10 stages. Those stages amount to calculating 128 individual 1024-point FFTs on different subsets of the original signal (not, unfortunately, successive windows, but the partial-bit-reversed subsets); the remaining 7 stages of butterflies then operate on those partial outputs. You'd want a pretty solid understanding of how the DIT FFT works to implement this.
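If you go that route, the underlying math is the Cooley-Tukey decomposition. Here is a MATLAB sketch of the idea for N = 128*1024; it uses a length-128 FFT for the outer stage in place of the remaining 7 radix-2 stages, which is mathematically equivalent, and the fft() along each row is where the 1024-point library call would go:
N2 = 1024;  N1 = 128;  N = N1*N2;
x  = x(:);                               % must be exactly N samples (truncate or zero-pad your record)
A  = reshape(x, N1, N2);                 % row r holds the subsequence x(r : N1 : end)
Xr = fft(A, [], 2);                      % 128 inner FFTs of length 1024 (library routine here)
T  = exp(-2i*pi*(0:N1-1).' * (0:N2-1) / N);   % twiddle factors
B  = fft(Xr .* T, [], 1);                % outer length-128 FFTs (the remaining stages)
X  = reshape(B.', [], 1);                % X(q + N2*p + 1) is full-length FFT bin q + N2*p
% max(abs(X - fft(x))) should be at round-off level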
I had assumed that this was going to output a set of frequency buckets that I could use to do pitch detection (like aubio pitch), but that doesn't seem to be what it does. I fired up the voice-omatic app and, using its frequency display, played various notes through my mic. Bars appear, all at the left-hand end, with almost no distinction between high and low notes. I upped the FFT size just to see if that changed anything; it seems not. I found a pitchdetect js project and saw that it used an analyser. Aha, I thought, here I will find the correct usage, but the meat of the code doesn't use the frequency-domain output; it feeds the time domain into its own algorithm. So to solve my problem I will use that library, but I am still curious what the frequency-domain data represents.
It does exactly what you think it does, but the specific implementation on that site is not great for seeing that.
The values are mapped in a way that is not well suited to seeing changes in pitch across the vocal range, because the frequency axis is mapped linearly rather than logarithmically. If you were to map each frequency bin logarithmically you would get a far more useful diagram. Right now the right-hand 3/4 of the visualizer shows roughly 5000 Hz to 20000 Hz, which contains very, very little of the energy in an audio signal compared to 0-5000 Hz. The fundamental frequency of the human voice (and of most instruments) lies mostly between 100 and 1000 Hz. There are harmonics above that, but at decreasing amplitudes the higher you go.
I've tweaked the code to tell you the peak frequency and the size of each bucket. Use this app.js: https://puu.sh/Izcws/ded6aae55b.js - you can use a tone generator on your phone to see how accurate it is. https://puu.sh/Izcx3/d5a9d74764.png
The way the code works: it calculates how big each bucket is, as described in my answer, with var bucketSize = 1 / (analyser.fftSize / analyser.context.sampleRate); (and adds two spans to show the data). Then, while drawing the bars, it keeps track of which bucket is the biggest and multiplies that bucket's index by the bucket size to get the peak frequency (written into el2). You can play with the fftSize and see why a small value will not work at all for determining whether you are playing A2 (110 Hz) or A2# (116.5 Hz).
From https://stackoverflow.com/a/43369065/13801789:
I believe I understand what you mean exactly. The problem is not with your code, it is with the FFT underlying getByteFrequencyData. The core problem is that musical notes are logarithmically spaced while the FFT frequency bins are linearly spaced.
Notes are logarithmically spaced: The difference between consecutive low notes, say A2(110 Hz) and A2#(116.5 Hz) is 6.5 Hz while the difference between the same 2 notes on a higher octave A3(220 Hz) and A3#(233.1 Hz) is 13.1 Hz.
FFT bins are linearly spaced: say we're working with 44100 samples per second and the FFT takes a window of 1024 samples. It compares that window first against a wave whose period is the full 1024 samples (let's call it wave1), i.e. a period of 1024/44100 ≈ 0.023 seconds, which is about 43.07 Hz, and puts the resulting amplitude in the first bin. Then it uses a wave at the frequency of wave1 * 2, about 86.13 Hz, then wave1 * 3 ≈ 129.2 Hz. So the spacing between the analysed frequencies is linear; it's always the same, about 43.07 Hz, unlike the spacing between musical notes, which changes.
This is why close low notes will be bundled up in the same bin while close high notes are separated. This is the problem with FFT's frequency resolution. It can be solved by taking windows bigger than 1024 samples, but that would be a trade off for the time resolution.
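A quick MATLAB check of that linear bin spacing (assuming the 44100 Hz / 1024-point numbers above):
fs = 44100;  nfft = 1024;
binWidth = fs / nfft;                        % about 43.07 Hz per bin
notes = [110 116.5 220 233.1 880 932.3];     % A2, A2#, A3, A3#, A5, A5#
round(notes / binWidth)                      % prints 3 3 5 5 20 22: the low pairs share a bin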
I have a 10 Hz time series measured by a fast instrument and a 1 minute time series measured by a slow reference instrument. The data consists of a fluctuating meteorological parameter. The slow reference instrument is used to calibrate the fast instrument measurements. Both time series are synchronised.
My idea:
Average the 10 Hz data into 1 minute blocks.
Take 5 one-minute blocks from each time series and calculate the linear regression equations.
Use the regression equations to calibrate the 10 Hz data in 5 minute blocks (3000 data points).
What would be the best way to match (calibrate) the high frequency data using the low frequency data? I use MATLAB.
More background: The fast instrument outputs a fluctuating voltage signal while the slow instrument outputs the true value of a trace gas concentration in ppb (parts per billion). The slow instrument samples every ten seconds and outputs the average every one minute.
In short, I would like to have my fast signal also in ppb but without losing its integrity (I need the turbulent fluctuations to remain unfiltered), hence the need to use a linear fit.
Here's my approach and the results I got...
I modelled the problem as there being
a real (unmeasured by instruments) signal.
Let's call this real.
a slow signal - which is just the real signal sampled once a minute.
Let's call this lf (short for low frequency).
a fast signal - real signal + noise + signal drift.
Let's call this hf (short for high frequency).
The task was to take the slow and fast signals and try to reconstruct the real signal.
(Using least squares as a scoring metric)
Strategy:
Define a "piecewise linear filter" - this takes a signal, and returns a piecewise version of it. (With each piecewise part occurring where the slow signal is measured.)
NOTE: The slow signal is considered piecewise anyway.
Define a forwards-backwards low pass filter.
Define "uncertainty" to be 0 at the points where the low frequency signal is measured. It linearly increases to 1 when the timestamp is halfway between low frequency signal measurements.
Now, take your high frequency signal and filter it with the low pass filter.
Let's call this hf_lp
Take hf_lp and apply the "piecewise linear filter" to it.
Let's call this hf_lp_pl
Subtract the last two from each other.
I.e. hf_diff = hf_lp - hf_lp_pl.
You now want to find some function that estimates by how much hf_diff should be added to the low frequency signal (lf) such that the squared error between real_estimated and real is minimized.
I fitted a function along the lines of real_estimated = lf + hf_diff.*(a1*uncertainty + a2*uncertainty.^2 + a3*uncertainty.^3)
Use fminsearch or other optimization techniques to get a1, a2, a3...
Here is a sample plot of my results - you can see that real_estimated is much closer to real than the slow signal lf.
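For concreteness, here is a minimal MATLAB sketch of the whole strategy on synthetic data; the signal model, filter cutoff and variable names are illustrative choices of mine rather than the original code, and butter/filtfilt need the Signal Processing Toolbox:
fs = 10;  T = 600;                        % 10 Hz fast instrument, 10 minutes of data
t  = (0:1/fs:T-1/fs)';
real_sig = 5 + sin(2*pi*t/120) + 0.3*sin(2*pi*t/7);        % stand-in "real" signal
t_lf = (0:60:T-60)';                      % slow instrument: one value per minute
lf   = interp1(t, real_sig, t_lf);        % slow signal = real sampled once a minute
hf   = real_sig + 0.002*t + 0.2*randn(size(t));            % fast signal = real + drift + noise
[bLP, aLP] = butter(2, 0.05);             % forwards-backwards low-pass (cutoff is a tuning knob)
hf_lp = filtfilt(bLP, aLP, hf);
% "piecewise linear filter": hf_lp at the slow-sample times, interpolated in between
hf_lp_pl = interp1(t_lf, interp1(t, hf_lp, t_lf), t, 'linear', 'extrap');
hf_diff  = hf_lp - hf_lp_pl;
phase       = mod(t, 60);
uncertainty = min(phase, 60 - phase) / 30;                 % 0 at slow samples, 1 halfway between
lf_on_t = interp1(t_lf, lf, t, 'linear', 'extrap');        % slow signal on the fast time grid
model   = @(a) lf_on_t + hf_diff .* (a(1)*uncertainty + a(2)*uncertainty.^2 + a(3)*uncertainty.^3);
cost    = @(a) sum((model(a) - real_sig).^2);              % least-squares score against "real"
a_opt   = fminsearch(cost, [1 0 0]);
real_estimated = model(a_opt);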
Closing thoughts...
The fast signal contains too much very-low-frequency content (drift) and too much very-high-frequency content (noise).
But it has valuable medium frequency info.
The slow signal has perfect low frequency information, but no medium frequency info.
The strategy above is really just one way of extracting the medium frequencies from the fast signal and adding it to the low frequency signal.
This way, we get the best of all worlds: low frequencies, medium frequencies and low noise.
I use Matlab to calculate the FFT of a time series. The signal has an unknown fundamental frequency (~80 MHz in this case), together with several higher-order harmonics (1st-20th). However, due to the finite sampling frequency (500 MHz in this case), I always get mixing frequencies from the higher-order harmonics (7th-20th), e.g. the 7th with a peak at abs(2*500-80*7)=440 MHz, the 8th at 360 MHz, and the 13th at abs(13*80-2*500)=40 MHz. Does anyone know how to get rid of these artificial mixing frequencies? One possible way is to increase the sampling frequency to a sufficiently large value, but my data set has a fixed number of points and a fixed time range, so the sampling frequency is actually determined by the data set itself. Any solutions to this problem?
(I have an image for this problem but I don't have enough reputation to post it. Sorry for the inconvenience in understanding this question.)
You are hitting on a fundamental property of sampling: when you sample data at a fixed frequency fs, you cannot tell the difference between two signals with the same amplitude but different frequencies, where one has f1 = fs/2 - d and the other has f2 = fs/2 + d. This effect is frequently used to advantage - for example in mixers - but at other times it's an inconvenience.
Unless you are looking for this mixing effect (done, for example, at the digital receiver in a modern MRI scanner), you need to apply a "brick wall" anti-aliasing filter with a cutoff frequency of fs/2 before sampling. It is not uncommon to have filters with a roll-off of 24 dB/octave or higher - in other words, they let "everything" through below the cutoff and "stop everything" above it.
Data acquisition vendors will often supply filtering solutions with their ADC boards for exactly this reason.
Long way to say: "That's how digitization works". But it's true - that is how digitization works.
Typically, one low-pass filters the signal to below half the sample rate before sampling. Otherwise, after sampling, there is usually no way to separate any aliased high-frequency content (your higher-order harmonics) from the more useful spectrum below half the sample rate (the Nyquist frequency).
If you don't filter the signal before sampling it, the defect is inherent in the sample vector, not the FFT.
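A quick MATLAB illustration of that point, using numbers like those in the question (an 80 MHz fundamental plus a 7th harmonic at 560 MHz, sampled at 500 MHz):
fs = 500e6;  N = 4096;
t  = (0:N-1)/fs;
x  = cos(2*pi*80e6*t) + 0.3*cos(2*pi*7*80e6*t);   % fundamental + 7th harmonic (560 MHz)
X  = abs(fft(x))/N;
f  = (0:N-1)*fs/N;
plot(f/1e6, X); xlabel('Frequency (MHz)');
% Peaks appear near 80 MHz and near 60 MHz (560 - 500), with a mirror near 440 MHz:
% the 560 MHz energy is already folded into the samples themselves, before any FFT.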
I'm a neuroscientist, and not a very good one. My colleague has kindly provided me with noisy voltage measurements of the PY neuron of the Stomatogastric Ganglion of the lobster.
The activity of this neuron is characterised by a slow depolarised plateau with fast spikes on top (a burst).
Both idealised and noisy versions are presented here for you to peruse at your leisure.
It's my job to extract the spike times from the noisy signal but this is so far beyond my experience level I have no idea where to begin. Fortunately, I am a total ninja at Matlab.
Could someone kindly provide me with the name of the procedure, filter or smoothing function which is best suited for this task. Or even the appropriate forum to ask such an asinine question.
Presumably, it needs to increase the signal to noise ratio? The problem here seems to be determining the difference between noise and a bona fide spike as the margin between the two is quite small.
UPDATE: 02/07/2013
I have tried the following filters in Matlab with mixed results. It's still very hard to say what is noise and what is a spike.
Lowpass Butterworth filter,
median filter,
gaussian,
moving weighted window,
moving average filter,
smooth,
sgolay filter.
This may not be an adequate response for stackoverflow - but one way of increasing a signal to noise ratio in your case is to average parts of the signal.
Low-pass filter your signal to remove noise (and the spikes), and find the minima of the filtered signal (from your image, one minimum every ~600 data points). Keep the index of each minimum.
On the noisy signal, for each minimum index, select the next 700 consecutive data points. If you have 50 minima, you should end up with a 50-by-700 matrix.
Average the matrix across its 50 rows. You should get a 1-by-700 vector.
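A rough MATLAB sketch of those three steps, assuming the noisy trace is a vector v; the sampling rate, cutoff, 600-sample spacing and 700-sample epoch length are placeholders to adjust to your data, and butter/filtfilt/findpeaks need the Signal Processing Toolbox:
fs = 1e3;                           % assumed sampling rate
v  = v(:);
[bLP, aLP] = butter(2, 5/(fs/2));   % heavy low-pass: keep only the slow wave
v_slow = filtfilt(bLP, aLP, v);
[~, locs] = findpeaks(-v_slow, 'MinPeakDistance', 600);   % minima of the filtered signal
win  = 700;
locs = locs(locs + win - 1 <= numel(v));                  % drop incomplete epochs
M    = zeros(numel(locs), win);
for k = 1:numel(locs)
    M(k, :) = v(locs(k) : locs(k) + win - 1).';           % 700 noisy samples per minimum
end
avg_burst = mean(M, 1);             % the 1-by-700 minimum-locked average
plot(avg_burst);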
By averaging parts of the signal (minimum-locked potentials), you take advantage of two properties: noise is zero-mean (well, it should be), and the signal of interest is repetitive. The first therefore decreases as you pile up potentials, and the second increases. With this process, however, you will lose the spike times for each individual slow wave, but at least you have them for blocks of 50 minima.
This technique is known in neuroscience as the event-related potential (http://en.wikipedia.org/wiki/Event-related_potential). It may not fit your signal perfectly, or the result may not give nice spikes, but you may be able to extract spike times for some periods of interest (given the nature of your signal, I would say you would need 5 or 10 potentials to see an emerging mean activity).
There are some toolboxes that do part of the job (though I would program it myself, given the complexity of the task), such as eeglab or fieldtrip. They have a bunch of filter/decomposition options too, as well as some statistical features.
I am a beginner in MATLAB and I have to perform a spectral analysis of an EEG signal, drawing the graphs of the power spectral density and the spectrogram. My signal is 10 seconds long with a sampling frequency of 160 Hz, for a total of 1600 samples, and I have some questions about how to choose the parameters of these MATLAB functions:
pwelch(x, window, noverlap, nfft, fs);
spectrogram(x, window, noverlap, F, fs);
My question, then, is where to find values for the parameters window and noverlap; I do not know what they are for.
To understand window functions and their use, let's first look at what happens when you take the DFT of a finite-length sample. Implicit in the definition of the discrete Fourier transform is the assumption that the finite length of signal you're considering is periodic.
Consider a sine wave, sampled such that a full period is captured. When the signal is replicated, you can see that it continues periodically as an uninterrupted signal. The resulting DFT has only one non-zero component and that is at the frequency of the sinusoid.
Now consider a cosine wave with a different period, sampled such that only a partial period is captured. Now if you replicate the signal, you see discontinuities in the signal, marked in red. There is no longer a smooth transition and so you'll have leakage coming in at other frequencies, as seen below
This spectral leakage occurs through the side lobes. To understand more about this, you should also read up on the sinc function and its Fourier transform, the rectangle function. The finite sampled sequence can be viewed as an infinite sequence multiplied by the rectangular function, and the leakage that occurs is related to the side lobes of the sinc function (the sinc and rectangular functions are Fourier transforms of each other). This is explained in more detail in the spectral leakage article linked above.
Window functions
Window functions are used in signal processing to minimize the effect of spectral leakages. Basically, what a window function does is that it tapers the finite length sequence at the ends, so that when tiled, it has a periodic structure without discontinuities, and hence less spectral leakage.
Some of the common windows are Hanning, Hamming, Blackman, Blackman-Harris, Kaiser-Bessel, etc. You can read up more on them from the wiki link, and the corresponding MATLAB commands are hann, hamming, blackman, blackmanharris and kaiser. Here's a small sample of the different windows:
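A quick way to compare the shapes yourself in MATLAB (the length of 64 is arbitrary):
N = 64;
plot([hann(N) hamming(N) blackman(N) blackmanharris(N) kaiser(N, 5)]);
legend('Hann', 'Hamming', 'Blackman', 'Blackman-Harris', 'Kaiser (\beta = 5)');
xlabel('Sample'); ylabel('Amplitude');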
You might wonder why there are so many different window functions. The reason is because each of these have very different spectral properties and have different main lobe widths and side lobe amplitudes. There is no such thing as a free lunch: if you want good frequency resolution (main lobe is thin), your sidelobes become larger and vice versa. You can't have both. Often, the choice of window function is dependent on the specific needs and always boils down to making a compromise. This is a very good article that talks about using window functions, and you should definitely read through it.
Now, when you use a window function, you have less information at the tapered ends. So, one way to fix that, is to use sliding windows with an overlap as shown below. The idea is that when put together, they approximate the original sequence as best as possible (i.e., the bottom row should be as close to a flat value of 1 as possible). Typical values vary between 33% to 50%, depending on the application.
Using MATLAB's spectrogram
The syntax is spectrogram(x,window,overlap,NFFT,fs)
where
x is your entire data vector
window is your window function. If you enter just a number, say W (must be integer), then MATLAB chops up your data into chunks of W samples each and forms the spectrogram from it. This is equivalent to using a rectangular window of length W samples. If you want to use a different window, provide hann(W) or whatever window you choose.
overlap is the number of samples that you need to overlap. So, if you need 50% overlap, this value should be W/2. Use floor(W/2) or ceil(W/2) if W can take odd values. This is just an integer.
NFFT is the FFT length
fs is the sampling frequency of your data vector. You can leave this empty, and MATLAB plots the figure in terms of normalized frequencies and the time axis as simply the data chunk index. If you enter it, MATLAB scales the axis accordingly.
You can also get optional outputs such as the time vector and frequency vector and the power spectrum computed, for use in other computations or if you need to style your plot differently. Refer to the documentation for more info.
Here's an example with 1 second of a linear chirp signal from 20 Hz to 400 Hz, sampled at 1000 Hz. Two window functions are used, Hanning and Blackman-Harris, with and without overlaps. The window lengths were 50 samples, and overlap of 50%, when used. The plots are scaled to the same 80dB range in each plot.
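Plots like those can be generated with something along these lines; the NFFT of 256 and the subplot layout are my guesses, the rest follows the description above:
fs = 1000;  t = 0:1/fs:1-1/fs;
x  = chirp(t, 20, 1, 400);                 % linear chirp, 20 Hz to 400 Hz over 1 s
subplot(2,2,1); spectrogram(x, hann(50), 25, 256, fs, 'yaxis'); title('Hann, 50% overlap');
subplot(2,2,2); spectrogram(x, hann(50), 0, 256, fs, 'yaxis'); title('Hann, no overlap');
subplot(2,2,3); spectrogram(x, blackmanharris(50), 25, 256, fs, 'yaxis'); title('Blackman-Harris, 50% overlap');
subplot(2,2,4); spectrogram(x, blackmanharris(50), 0, 256, fs, 'yaxis'); title('Blackman-Harris, no overlap');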
You can notice the difference in the figures (top vs. bottom) due to the overlap: you get a cleaner estimate if you use overlap. You can also observe the trade-off between main-lobe width and side-lobe amplitude that I mentioned earlier. Hanning has a thinner main lobe (the prominent line along the skew diagonal), resulting in better frequency resolution, but has leaky side lobes, seen as the bright colors outside. Blackman-Harris, on the other hand, has a fatter main lobe (a thicker diagonal line) but less spectral leakage, evidenced by the uniformly low (blue) outer region.
Both of the methods above are short-time methods of operating on signals. Non-stationarity of the signal (its statistics, such as the mean, being functions of time) implies that you can only assume the statistics of the signal are constant over short periods of time. There is no way of determining such a period (over which the statistics are constant) exactly, so it is mostly guesswork and fine-tuning.
Say the signal you mentioned above is non-stationary (which EEG signals are), and assume it is stationary only for about 10 ms or so. To reliably measure statistics like the PSD or energy, you need to measure them 10 ms at a time. The windowing function is what you multiply the signal with to isolate the 10 ms of signal on which you will compute the PSD etc. You then need to traverse the length of the signal with a shifting window (to window the entire signal 10 ms at a time). Overlapping the windows gives you a more reliable estimate of the statistics.
You can imagine it like this:
1. Take the first 10ms of the signal.
2. Window it with the windowing function.
3. Compute statistic only on this 10ms portion.
4. Move the window by 5ms (assume length of overlap).
5. Window the signal again.
6. Compute statistic again.
7. Move over entire length of signal.
There are many different types of window functions - Blackman, Hanning, Hamming, Rectangular. That and the length of the window and overlap really depend on the application that you have and the frequency characteristics of the signal itself.
As an example, in speech processing (where the signals are non-stationary and windowing gets used a lot), the most popular choices are Hamming/Hanning windows of about 20 ms (320 samples at 16 kHz sampling) with an overlap of 80 samples (25% of the window length). This works reasonably well. You can use it as a starting point for your application and then fine-tune it with different values.
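A bare-bones MATLAB version of the sliding-window recipe in the numbered steps above, using the speech-style figures just quoted (320-sample Hamming window, 80-sample overlap); the energy statistic is only an example:
frame = 320;                       % window length in samples (20 ms at 16 kHz)
hop   = frame - 80;                % advance by frame minus the 80-sample overlap
w     = hamming(frame);
x     = x(:);                      % your signal as a column vector
nFrames = floor((length(x) - frame) / hop) + 1;
stat    = zeros(nFrames, 1);
for k = 1:nFrames
    seg     = x((k-1)*hop + (1:frame)) .* w;   % grab and window one portion
    stat(k) = sum(seg.^2);                     % compute a statistic on it (here, energy)
end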
You may also want to take a look at the following functions in MATLAB:
1. hamming
2. hanning
I hope you know that you can call up a ton of help in MATLAB using the help command on the command line; MATLAB is one of the best-documented software packages out there. Using the help command for pwelch also pulls up the definitions of window size and overlap, which should help you out too.
I don't know whether all this info helped you or not, but looking at the question, I felt you might need a little help understanding what windowing and overlapping are all about.
HTH,
Sriram.
As for the last parameter, fs: that is the sampling rate of the raw signal, in your case X. When you read X from audio data using
[X, fs] = audioread('song.mp3');
you get fs from that call.
Investigate how the following parameters change the performance of the sinc function:
The length of the coefficients
The following window functions:
Blackman-Harris
Hanning
Bartlett
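If the "sinc function" here means a windowed-sinc (FIR low-pass) filter, a small MATLAB experiment along these lines lets you vary the coefficient length and the window; the sampling rate and cutoff are arbitrary choices for illustration:
fs = 8000;  fc = 1000;                      % illustrative sampling rate and cutoff
hold on
for M = [21 51 101]                         % coefficient lengths to compare (odd)
    n = -(M-1)/2 : (M-1)/2;
    h = 2*fc/fs * sinc(2*fc/fs * n);        % truncated ideal low-pass impulse response
    for w = {blackmanharris(M), hann(M), bartlett(M)}
        [H, f] = freqz(h(:) .* w{1}, 1, 1024, fs);
        plot(f, 20*log10(abs(H)));          % compare roll-off and stop-band ripple
    end
end
xlabel('Frequency (Hz)'); ylabel('Magnitude (dB)');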