I am new to Matlab and signal processing. I am having an issue with defining the frequency range over which the spectrogram is computed. When I plot the spectrogram of .wav audio data, the y axis, frequency, spans from zero to around 23 kHz. The useful data I am looking for is in the range of 200-400 Hz. My code snippet is:
[samFa, fs] = audioread('samFa.wav');  % read audio into numerical data and sampling rate
samFa = samFa(:,1);                    % take only one channel
spectrogram(samFa, 2205, 1200, 12800, fs, 'yaxis', 'MinThreshold', -80);
I don't want to be some newbie who runs into a problem, instantly gives up, and posts a duplicate question to Stack Overflow, so I have done as much digging as I can, but I am at my wit's end.
I scoured the documentation for parameters or ways to have Matlab only analyze a subset or range of the data, but found nothing. Additionally, in all of the examples the frequency range seems to automatically adapt to the data set.
I know it is possible to just calculate the spectrogram over the entire range of frequencies and then discard the unnecessary data, either by truncating the output or by manually changing the limits in the plot itself, but changing plotting limits does not help with the numerical data.
I went searching through many similar questions, and found an answer all the way from 2012 here: Can I adjust spectogram frequency axes?
where the suggested answer was to import a vector of specific frequencies for the spectrogram to analyze. I tried passing a vector of integer values between 200 and 400, and a few other test ranges, but got the error:
Error using welchparse>welch_options (line 297)
The sampling frequency must be a scalar.
I've tried passing the parameter in at different places in the function, to no avail, and I don't see anything regarding this parameter in the documentation, leading me to believe that this functionality may have been removed sometime between 2012 and now.
When plotting the spectrogram without providing the sampling frequency, Matlab produces a normalized spectrogram, which gives a much smaller data window; I can visually assess that the content only covers roughly 0-5 kHz (an artifact of overtones in the audio), so I know that Matlab is not finding any data above this range that would push the frequency axis out to 20 kHz.
I've been trying to learn some signal processing for this project, and I believe the Nyquist frequency, which is half the sampling frequency, is the maximum frequency that a Fourier transform is able to analyze. My recording is sampled at 44,100 Hz, and the spectrogram ranges up to around 22 or 23 kHz, leading me to believe that Matlab is noticing my sampling frequency and assuming that it needs to analyze all the way up to that range.
For the work I am doing I need to produce thousands of spectrograms that are then fed into much further analysis, so it is very time consuming for Matlab to process so much unnecessary data, and I would expect there to be some functionality in Matlab to get around this.
Sorry for the very long post, but I wanted to fully explain my problem and show that I have done as much work as I could to solve the problem before turning for help. Thank you very much.
Get the axis handle and set the visual range there:
spectrogram(samFa, 2205, 1200, 12800, fs, 'yaxis', 'MinThreshold', -80);
ax = gca;              % handle to the current axes
ylim(ax, [0.2, 0.4]);  % frequency limits in kHz
And if you want to calculate only a specific range of frequencies to save time, you are better off using goertzel:
N = length(data);                         % DFT length (here, the number of samples)
f = 200:10:400;                           % frequencies of interest, in Hz
freq_indices = round(f/fs*N) + 1;         % convert frequencies to DFT bin indices
dft_data = goertzel(data, freq_indices);  % evaluate the DFT only at those bins
I have an audio recording and I want to detect a sinusoidal pattern in it. If I do a regular FFT, I get a result with a bad SNR. For example, my signal contains 4 high frequencies, but they barely stand out in the FFT result.
To reduce the noise I want to do coherent integration as described in this article: http://flylib.com/books/en/2.729.1.109/1/ but I can't find any MATLAB examples of how to do it. Please help.
I look at spectra almost every day, but I have never heard of 'coherent integration' as a method to calculate one. As Jason also mentioned, coherent integration would only work when your signal has a fixed phase during every FFT you average over.
It is more likely that you want to do what the article calls 'incoherent integration'. This is more commonly known as calculating a periodogram (or Welch's method, a slightly better variant), in which you average the squared absolute value of the individual FFTs to obtain a power spectral density. To calculate a PSD in the correct way, you need to pay attention to some details, like applying a suitable Fourier window before doing each FFT, doing the proper normalization (so that the result is properly calibrated, e.g. in Volt^2/Hz), and using half-overlapping windows to make use of all your data. All of this is implemented in Matlab's pwelch function, which is part of the Signal Processing Toolbox. See my answer to a similar question about how to use pwelch.
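As a minimal sketch of such a call (the signal name x, its sampling rate fs, and the 1024-sample segment length are assumptions, not values from the question):

% Assumed inputs: x is the recorded signal, fs its sampling rate in Hz.
nseg = 1024;                                         % samples per segment (placeholder choice)
[pxx, f] = pwelch(x, hann(nseg), nseg/2, nseg, fs);  % Hann window, 50% overlap
semilogy(f, pxx);                                    % PSD in (signal units)^2/Hz
xlabel('Frequency (Hz)'); ylabel('PSD');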
Integration or averaging of FFT frames just amounts to adding the frames up element-wise and dividing by the number of frames. Since MATLAB provides vector operations, you can just add the frames with the + operator.
coh_avg = (frame1 + frame2 + ...) / Nframes
Where frameX are the complex FFT output frames.
If you want to do non-coherent averaging, you just need to take the magnitude of the complex elements before adding the frames together.
noncoh_avg = (abs(frame1) + abs(frame2) + ...) / Nframes
Also note that in order for coherent averaging to work the best, the starting phase of the signal of interest needs to be the same for each FFT frame. Otherwise, the FFT bin with the signal may add in such a way that the amplitudes cancel out. This is usually a tough requirement to ensure without some knowledge of the signal or some external triggering so it is more common to use non-coherent averaging.
Non-coherent integration will not reduce the noise power, but it will increase signal to noise ratio (how the signal power compares to the noise power), which is probably what you really want anyway.
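For illustration, here is one way both averages could be written in MATLAB, assuming the FFT frames have been collected as the columns of a complex matrix called frames (that variable and its layout are assumptions for this sketch):

% Assumed layout: frames is an nfft-by-Nframes matrix, one complex FFT output per column.
Nframes    = size(frames, 2);
coh_avg    = sum(frames, 2) / Nframes;       % coherent: average the complex spectra
noncoh_avg = sum(abs(frames), 2) / Nframes;  % non-coherent: average the magnitudes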
I think what you are looking for is the spectrogram function in Matlab, which computes the short-time Fourier transform (STFT) of an input signal.
I am new to Matlab and signal processing methods, but I am trying to use its filtering capabilities on a set of data I have. I have a collection of amplitude values obtained at different timestamps. When this is plotted, I get a waveform with several peaks that I can identify. I then perform calculations to derive the time between each pair of consecutive peaks, and I want to eliminate the rates that are around the range of 48-52 peaks per second.
What would be the correct way to go about processing this data step by step? Would a bandstop or notch filter be better if I want to eliminate those frequencies rather than simply attenuate them? I am completely lost as to the parameters required to feed into the filters for this. Please help...
periodogram is OK, but I would suggest using pwelch instead. It makes a more reasonable PSD estimate and the default parameters are well thought out (Hann windows, 50% overlap of segments, etc.)
If what you want is to remove signals in a wide band (e.g. 48-52 Hz) equally, rather than a single and unchanging frequency, then a bandstop filter is ideal. For example:
fs = 2048;                                        % sampling frequency, Hz
y = rand(fs*8, 1);                                % 8 seconds of example data
[b,a] = ellip(4, 2, 40, [46 54]/(fs/2), 'stop');  % 4th-order elliptic bandstop, 46-54 Hz
yy = filter(b, a, y);                             % apply the filter
This will use a 4th-order elliptic bandstop filter to filter the random data variable y. filtfilt is also a nice function; it applies the filter forwards and backwards, so you get twice the filter action and none of the phase lag or dispersion.
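For example, swapping the last line of the snippet above for a zero-phase version might look like this:

yy = filtfilt(b, a, y);  % forward-backward filtering: double attenuation, no phase lag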
I am currently doing something similar to what you are doing.
I am processing a lot of signals from an Inertial Measurement Unit and from motor drives. They are all acquired asynchronously, i.e. they all have very different timestamps and also very different acquisition frequencies.
The first thing I did was to interpolate all the signals so that they share the same timestamps. You can use the Matlab function interp1 to do this.
After this, all signals will have the same sample frequency and the same timestamps, which will help in further analysis.
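A rough sketch of that resampling step, assuming each signal arrives as a timestamp vector and a value vector (t1, x1, t2, x2, and the 100 Hz target rate are all placeholders):

% Assumed inputs: (t1, x1) and (t2, x2) are two asynchronously sampled signals.
Fs = 100;                                               % common sampling rate, Hz (placeholder)
t  = max(t1(1), t2(1)) : 1/Fs : min(t1(end), t2(end));  % shared time base over the overlap
x1r = interp1(t1, x1, t, 'linear');                     % both signals on the same timestamps
x2r = interp1(t2, x2, t, 'linear');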
OK, another thing you can do to analyse the frequency of the peaks is to perform a Fourier transform. For beginners I advise the use of the periodogram function rather than the fft function.
Imagine your signal is x and your sample frequency (after interpolation) is Fs.
You can now use the function periodogram available in matlab like this:
[P,f] = periodogram(x, [], length(x), Fs);
This will give the power versus frequency of your signal. After that you will be able to plot it and take a look at the frequencies of your signal. In other words, you will be able to see the frequencies of the components that make up your acquired signal.
Plot the data this way:
plot(f,P); or semilogy(f,P);
The second is the same as the first, but with a logarithmic y-axis.
After this analysis you can use the Filter Design and Analysis Tool to design your filter. Just type fdatool in Matlab and it will open the design window. Choose the filter type and the cut-off and pass frequencies, then click Design. This tool is very intuitive.
After designing, you can export the filter to the workspace.
Finally, you can apply the filter you designed to your signal to see if it is what you wanted.
Use the functions filter or filtfilt for this.
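Assuming you exported the filter coefficients from fdatool as a numerator Num and denominator Den (Den is just 1 for FIR designs), applying them might look like this sketch:

% Assumed variables: Num/Den exported from fdatool, x is the interpolated signal.
y_filtered  = filter(Num, Den, x);    % standard causal filtering
y_zerophase = filtfilt(Num, Den, x);  % zero-phase filtering (forward and backward)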
Search the web or the Matlab help for the functions I mentioned to get more details.
There are a lot of examples available too.
I hope I could help you.
Good luck.
Can someone please give me an idea about this problem in Matlab?
I have 4 .wav files that contain the chirping of birds. Each .wav file represents a different bird. Given an input .wav file, I need to decide which bird it is. I know I have to make a frequency spectrum comparison to get to the solution, but I don't quite know how I should use spectrogram to help me get there.
P.S. I know what spectrogram does and have plotted quite a few .wav files with it.
There are several methods for a pattern recognition problem like the one you are describing.
You can use a frequency analysis like FFT with the matlab function
s = spectrogram(x, window, noverlap)
In spectrogram you need to define the time window of the signal to be analysed in the variable window. You can use a rectangular window (for example window = ones(1, N) for a desired length N), and there are many other windows to choose from: hanning, hamming, blackman. You should use the one that works best for your problem. noverlap is the number of samples by which consecutive windows overlap.
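A small sketch of such a call, assuming x is one bird recording and fs its sampling rate (the 1024-sample Hamming window and 50% overlap are placeholder choices):

% Assumed inputs: x is the audio signal, fs its sampling rate in Hz.
win      = hamming(1024);                        % 1024-sample Hamming window (placeholder length)
noverlap = 512;                                  % 50% overlap between consecutive windows
[S, F, T] = spectrogram(x, win, noverlap, 1024, fs);
imagesc(T, F, 20*log10(abs(S) + eps));           % time-frequency image in dB
axis xy; xlabel('Time (s)'); ylabel('Frequency (Hz)');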
Besides this approach, the wavelet transform is also a good technique to solve your problem. Matlab also has a good toolbox for applying discrete and continuous wavelets.
You might try to solve the problem with Deep Belief Networks
Here are some articles that might be helpful:
Audio Feature Extraction with Restricted Boltzmann Machines
Unsupervised feature learning for audio classification
To summarize the idea: instead of manually tuning the features, employ RBMs or an autoencoder to extract features (bases) which represent the observed audio samples, and then run the learning algorithm on those.
You will need more than 4 audio samples in order to train a DBN, but it is worth trying, as the approach has shown promising results in the past.
This tutorial might also be helpful.
This may prove to be a complicated problem. As a starting point I advise you to divide each recording into fixed-length frames, for example 20 ms with 10 ms overlap, then take the FFT of these frames and keep the frequencies with the most energy in each frame. As a last step, compare the frame frequencies of the recordings with each other and determine the result by selecting the maximum correlation.
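One possible sketch of that frame-by-frame step (x, fs, and the use of a Hamming window are assumptions; x is taken to be a column vector):

% Assumed inputs: x is the audio signal (column vector), fs its sampling rate in Hz.
frameLen = round(0.020 * fs);                       % 20 ms frames
hop      = round(0.010 * fs);                       % 10 ms hop (i.e. 10 ms overlap)
nFrames  = floor((length(x) - frameLen) / hop) + 1;
peakFreq = zeros(nFrames, 1);
for k = 1:nFrames
    seg = x((k-1)*hop + (1:frameLen));              % extract one frame
    X   = abs(fft(seg .* hamming(frameLen)));       % windowed FFT magnitude
    [~, idx] = max(X(1:floor(frameLen/2)));         % strongest bin in the positive half
    peakFreq(k) = (idx - 1) * fs / frameLen;        % bin index -> frequency in Hz
end
% The peakFreq sequences of two recordings can then be compared, e.g. with corrcoef.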
I'm looking for some functions in MATLAB in order to find out some parameters of a sound, such as intensity, density, frequency, time, and spectral identity.
I know how to use audiorecorder as a function to record the sampled voice, and also getaudiodata in order to plot it. But I need to obtain the parameters of a sampled recorded voice that I mentioned above. I'd be so thankful if anyone could help me.
This is a very vague question; you may want to narrow it down (at first) and add as much contextual detail as you can, as it will certainly attract a lot more answers (also, as mentioned by Ion, you could post it at http://dsp.stackexchange.com).
Sound intensity: microphones usually measure pressure, but you can get the intensity from that quite easily (see this question). Your main problem is that microphones are usually not calibrated, which means that you cannot associate an amplitude with a pressure. You can get sound density from sound intensity.
Frequency: you can get the spectrum of your sound by using the Fast Fourier Transform (see the Matlab function fft).
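A minimal sketch of that (the signal name y and sampling rate Fs are assumptions, e.g. as obtained from your audiorecorder setup via getaudiodata):

% Assumed inputs: y is the recorded signal, Fs its sampling rate in Hz.
N = length(y);
Y = fft(y);
f = (0:N-1) * Fs / N;                              % frequency axis in Hz
plot(f(1:floor(N/2)), abs(Y(1:floor(N/2))) / N);   % one-sided amplitude spectrum
xlabel('Frequency (Hz)'); ylabel('Amplitude');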
As for spectral or time identity, I believe these are psychoacoustic notions, which is not really my area of expertise.
I'm no expert but I have played with Matlab a little in the past.
One function I remember was wavread(), which reads a sound signal into Matlab; executed in the form [Y, FS, NBITS] = wavread('AUDIO.WAV') it would return something like:
AUDIO.WAV:
Fs = 100 kHz
Bits per sample = 10
Size = 100000
(numbers from the top of my head)
Now about the other things you ask, I'm not really sure. You can expect a better answer from somebody else. I think this question should be moved to Signal Processing SE btw.
I have FFT outputs that look like this:
At 523 Hz is the maximum value. However, being a messy FFT, there are lots of little peaks right near the large peaks. They are irrelevant, whereas the peaks shown aren't. Are there any algorithms I can use to extract the maxima of this FFT that matter, i.e. that aren't just random peaks cropping up near "real" peaks? Perhaps there is some sort of filter I can apply to this FFT output?
EDIT: The context of this is that I am trying to take one-hit sound samples (like someone pressing a key on a piano) and extract the loudest partials. In the image below, the peaks above 2000 Hz are important, because they are discrete partials of the given sound (which happens to be a sort of bell). However, the peaks scattered right near 523 Hz seem to be just artifacts, and I want to ignore them.
If the peak is broad, it could indicate that the peak frequency is modulated (AM, FM or both), or is actually a composite of several spectral peaks, themselves each potentially modulated.
For instance, a piano note may be the result of the hammer hitting up to 3 strings that are all tuned just a tiny fraction differently, and they can all modulate as they exchange energy through the piano frame. Guitar strings can change frequency as the pluck-shape distortion smooths out and decays. Bells change shape after they are hit, which can modulate their spectrum. Etc.
If the sound itself is "messy" then you need a good definition of what you mean by the "real" peak before applying any sort of smoothing or side-band rejection filter. For example, all that "messiness" may be part of what makes a bell sound like a real bell instead of an electronic sine-wave generator.
Try convolving your FFT magnitude (treating it as a signal) with a rectangular pulse (pulse = ones(1,20)/20;). This might get rid of some of them. Your maxima will be shifted 10 frequency bins to the right; take that into account. You would basically be integrating your signal. Similar techniques are used in the Pan-Tompkins algorithm for heartbeat identification.
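A rough sketch of that idea (the vector X holding the FFT magnitudes is an assumption; the 'same' option keeps the output length and centres the kernel, so the 10-bin shift is already compensated):

% Assumed input: X is the FFT magnitude spectrum as a vector.
pulse   = ones(1, 20) / 20;          % 20-bin rectangular (moving-average) kernel
Xsmooth = conv(X, pulse, 'same');    % smoothed spectrum, same length as X
[~, idx] = max(Xsmooth);             % location of the dominant smoothed peak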
I worked on a similar problem once and chose to use Savitzky-Golay filters for smoothing the spectrum data. I could get some significant peaks, and it didn't mess too much with the overall spectrum.
But I had a problem with what hotpaw2 is alerting you to: I lost important characteristics along with the "messiness", so I truly recommend you listen to him. But if you think you won't have a problem with that, I think Savitzky-Golay can help.
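A possible sketch with the Signal Processing Toolbox function sgolayfilt (the vector X of FFT magnitudes, the polynomial order, and the frame length are all placeholder assumptions):

% Assumed input: X is the FFT magnitude spectrum as a vector.
order    = 3;                                % polynomial order (placeholder)
framelen = 21;                               % odd frame length in bins (placeholder)
Xsg      = sgolayfilt(X, order, framelen);   % Savitzky-Golay smoothed spectrum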
There are non-FFT methods for creating a frequency-domain representation of time-domain data which are better for noisy data sets, like maximum-entropy (max-ent) reconstruction.
For noisy time-series data, a max-ent reconstruction will be capable of distinguishing true peaks from noise very effectively (without adding any artifacts or other modifications to suppress noise).
Max-ent works by "guessing" an FFT for a time-domain spectrum, doing an IFT, and comparing the result with the actual time-series data, iteratively. The final output of max-ent is a frequency-domain spectrum (like the one you show above).
There are implementations in Java, I believe, for 1-D spectra, but I have never used one.