I'm dealing with the CWT, and I have a big problem converting scales to frequencies. In the MATLAB Wavelet Tutorial they use this expression to convert scales to frequencies
But if I use the built-in function scal2freq I obtain a different result.
I don't understand the role of the Morlet Fourier Factor
Thanks in advance
It is a pretty complicated concept, which I myself only somewhat understand. I'll write some points here so that you can figure it out more easily yourself.
A simple fact is that:
Scale is inversely proportional to frequency.
For example, imagine we have a 1-100 Hz range of frequencies in some time series data such as stock market data or earthquake data. Scale is, roughly speaking, the inverse of that. For instance, if scale ranged from 1 to 100, we would have:
Scale (1/Hz)    Frequency (Hz)
1               100
50              2
100             1
Therefore,
The frequency is not the real frequency of those time series data (e.g., stock market, earthquake) that we know of. They are only related, inversely.
And we can safely say that here we are calculating "pseudo-frequencies", which is exactly what MATLAB does (by approximation). You can read about the approximation process in the documentation, in the section on pseudo-frequencies:
MATLAB calculates those pseudo-frequencies based on:
In wavelet analysis, the way to relate scales to frequencies is to determine the center frequency of the wavelet function:
which you can see visually in this image, and of course it differs when we change the type of wavelet function used in the calculation. Thus, that center frequency will change every time in our approximation process:
That "MorletFourierFactor" is a variable to approximate a constant so that when you would do the 1/scale, it would closely approximate those "pseudo-frequencies".
I thought this image about shifting (time axis) and scaling (frequency axis) might be a little helpful to look into as well:
The bottom line: don't worry too much about pseudo-frequencies; you probably won't need them. If you want a frequency spectrum, you can apply one of the standard frequency-domain methods (such as the Fast Fourier Transform) to whatever time series data you have.
If you really do want that mapping, you can also try to design an approximation method yourself.
Source
Harvard Seismology
I have an acoustic waveform of a Spanish phoneme and I'd like to compute its magnitude spectrum and plot it in dB magnitude on a linear frequency scale. How would I be able to accomplish this in MATLAB?
Thanks
First a quick heads up: At stackoverflow you are expected to show some of your own efforts to solve the problem and then ask for help.
Now to your question:
You can plot the spectrogram using the "spectrogram" MATLAB function.
[s,f,t] = spectrogram(x,window,noverlap,f,fs)
Check the details here: https://www.mathworks.com/help/signal/ref/spectrogram.html
For a speech signal you will want to specify the sampling frequency "fs", which you can get when you read the file using:
[y,Fs] = audioread(filename)
You will probably want to specify the variables "window" and "noverlap", since speech signals can show distinct properties depending on the dimension of the window (fast phenomena will not be visible with big windows). Typical values are 20 ms windows with 10 ms overlap (select the best value by considering your sampling frequency and the nearest 2^n value for fast Fourier calculation).
The window size and overlap are also relevant when you calculate the spectrum. If you apply the FFT to the whole waveform then you will get the "average" spectral information for the sentence. To catch specific phenomena you must use windowing techniques and perform a short-term Fourier analysis.
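If all you need is the dB magnitude spectrum on a linear frequency axis, a minimal sketch is below (the filename is a placeholder, and hamming assumes the Signal Processing Toolbox):

[y, Fs] = audioread('phoneme.wav');          % placeholder filename
y = y(:,1);                                  % keep one channel if the file is stereo
N = length(y);
Y = fft(y .* hamming(N));                    % windowed FFT of the whole waveform
f = (0:N-1) * Fs / N;                        % frequency axis in Hz
half = 1:floor(N/2)+1;                       % keep the non-negative frequencies
plot(f(half), 20*log10(abs(Y(half)) + eps)); % dB magnitude on a linear frequency scale
xlabel('Frequency (Hz)'); ylabel('Magnitude (dB)');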
Use sptool from the Signal Processing Toolbox; see its documentation.
Can anybody kindly explain how the Orthogonal Frequency Division Multiple Access works and its advantages in simple English, avoiding using “Fourier”? I am totally confused about many descriptions that using “Fourier” things to explain. (Or if anyone can make “Fourier” things clearer to understand...)
In wireless it is common practice to view a signal in the time domain or the frequency domain. For example, in the time domain a simple sinusoid with a frequency of one Hertz (one cycle per second, hence a time period of one second) shows that on the time axis the sine wave repeats itself every second. We can view the same signal in the frequency domain as well, where it is just a delta function at one Hertz (in the frequency domain the x-axis represents frequency, while in the time domain the x-axis represents time). Think of the Fourier transform as a Tardis machine: you punch in some numbers on this machine and it takes you to the frequency world to show how the same thing exists there, and of course you can get back to the time world again by reversing the process that brought you into the frequency world in the first place. So it is easy to see that a never-ending time-domain signal with a period of one second is represented by just one point in the frequency domain.
What is orthogonality? To explain this, let us talk about things in the frequency domain only. As usual, the signals are represented as complex numbers or complex exponentials. We can also view these complex exponentials or signals in two-dimensional coordinate form, which is called the vector representation of a signal. A cosine wave of magnitude one is represented by (1 + 0*i), or in vector form [1; 0], and a sine wave of magnitude one is represented by (0 + 1*i), or in vector form [0; 1]. Two signals are said to be orthogonal if the dot product of their vectors is equal to zero. Hence cosine and sine waves are orthogonal to each other. This simply means that if we view those two signals in the time domain, there will be a moment when the cosine is at its peak but at the same time the sine wave is zero.
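A quick numerical check of this orthogonality in MATLAB (the frequencies and symbol length here are just illustrative): two subcarriers spaced by 1/T have a zero dot product over one symbol of length T.

Fs = 1000; T = 1;                 % 1 s symbol, sampled at 1 kHz (illustrative)
t  = (0:1/Fs:T-1/Fs)';
s1 = cos(2*pi*3*t);               % subcarrier at 3 Hz
s2 = cos(2*pi*4*t);               % subcarrier at 4 Hz, i.e. 3 Hz + 1/T
dot(s1, s2) / Fs                  % ~0: the two subcarriers are orthogonal
dot(s1, s1) / Fs                  % ~0.5: a signal is not orthogonal to itself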
OFDM exploits this property of orthogonality and puts information bits on those orthogonal signals at the same time. Since the signals are orthogonal, the receiver just needs to know the frequency and exact phase to retrieve all the information (by the sampling process). This provides protection against inter-symbol interference (ISI), the major advantage of the OFDM technique. OFDM also provides a huge advantage in frequency-selective fading because it breaks the wideband spectrum (carrier) into small chunks of spectrum (subcarriers).
Hope it might help.
Let me try to make Fourier things easier to understand. Maybe it helps. Each time-dependent signal can be written as the weighted sum of sine and cosine 'waves' (meaning sines and cosines as function of time). The correct weight of a certain sine or cosine denotes how strong that particular frequency is present in your signal.
So on one hand you can represent your signal by telling how strong it is on each time instant. In other words, it can be represented by a function of time: strength (t).
On the other hand you can also represent that same signal by telling how strong each frequency is present in it. In other words, it can be represented by a function of frequency: weight (f).
Computing weight (f) from strength (t) is called Fourier transform.
Computing strength (t) from weight (f) is called inverse Fourier transform.
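In MATLAB terms, a tiny round-trip sketch of these two transforms (the signal here is arbitrary):

Fs = 100;                                    % samples per second
t  = (0:1/Fs:1-1/Fs)';                       % one second of time
x  = sin(2*pi*5*t) + 0.5*cos(2*pi*12*t);     % strength(t)
X  = fft(x);                                 % Fourier transform -> weight(f)
x2 = ifft(X);                                % inverse Fourier transform -> strength(t)
max(abs(x - x2))                             % essentially zero (round-off only)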
While this sounds complicated, your ears are doing it all the time. You don't hear a time signal, you hear frequencies, high and low 'notes'.
Computing Fourier transforms used to be very time consuming, until something called FFT (Fast Fourier Transform) was invented. It is just a computational trick and you don't need to know anything of it to understand what a Fourier transform is.
I don't pretend this to be a full answer to your question but maybe it helps.
PS. As for OFDMA, of course radio signals are not sounds. But the principle is the same.
Without using FFT and Inverse FFT, OFDMA can be considered similar to FDMA.
I have an audio recording.
I want to detect a sinusoidal pattern.
If I do a regular FFT I get a result with bad SNR.
For example,
my signal contains 4 high frequencies:
FFT result:
To reduce the noise I want to do coherent integration as described in this article: http://flylib.com/books/en/2.729.1.109/1/
but I can't find any MATLAB examples of how to do it. Sorry for my bad English. Please help.
I look at spectra almost every day, but I never heard of 'coherent integration' as a method to calculate one. As also mentioned by Jason, coherent integration would only work when your signal has a fixed phase during every FFT you average over.
It is more likely that you want to do what the article calls 'incoherent integration'. This is more commonly known as calculating a periodogram (or Welch's method, a slightly better variant), in which you average the squared absolute value of the individual FFTs to obtain a power spectral density. To calculate a PSD in the correct way, you need to pay attention to some details, like applying a suitable Fourier window before doing each FFT, doing the proper normalization (so that the result is properly calibrated in, e.g., Volt^2/Hz) and using half-overlapping windows to make use of all your data. All of this is implemented in MATLAB's pwelch function, which is part of the Signal Processing Toolbox. See my answer to a similar question about how to use pwelch.
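As a rough sketch of that pwelch call (the segment length is illustrative; x and fs are assumed to be your recording and its sampling rate):

nwin = 1024;                                        % samples per segment (illustrative)
[pxx, f] = pwelch(x, hann(nwin), nwin/2, nwin, fs); % half-overlapping Hann windows
plot(f, 10*log10(pxx));                             % PSD in dB/Hz
xlabel('Frequency (Hz)'); ylabel('PSD (dB/Hz)');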
Integration or averaging of FFT frames just amounts to adding the frames up element-wise and dividing by the number of frames. Since MATLAB provides vector operations, you can just add the frames with the + operator.
coh_avg = (frame1 + frame2 + ...) / Nframes
Where frameX are the complex FFT output frames.
If you want to do non-coherent averaging, you just need to take the magnitude of the complex elements before adding the frames together.
noncoh_avg = (abs(frame1) + abs(frame2) + ...) / Nframes
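If you keep the FFT frames as columns of a matrix (the variable name frames is just for illustration), both averages are one line each:

coh_avg    = mean(frames, 2);         % coherent: average the complex frames
noncoh_avg = mean(abs(frames), 2);    % non-coherent: average the magnitudes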
Also note that in order for coherent averaging to work the best, the starting phase of the signal of interest needs to be the same for each FFT frame. Otherwise, the FFT bin with the signal may add in such a way that the amplitudes cancel out. This is usually a tough requirement to ensure without some knowledge of the signal or some external triggering so it is more common to use non-coherent averaging.
Non-coherent integration will not reduce the noise power, but it will increase signal to noise ratio (how the signal power compares to the noise power), which is probably what you really want anyway.
I think what you are looking for is the "spectrogram" function in MATLAB, which computes the short-time Fourier transform (STFT) of an input signal.
STFT
Spectrogram
I have FFT outputs that look like this:
At 523 Hz is the maximum value. However, being a messy FFT, there are lots of little peaks that are right near the large peaks. However, they're irrelevant, whereas the peaks shown aren't. Are there any algorithms I can use to extract the maxima of this FFT that matter, i.e., aren't just random peaks cropping up near "real" peaks? Perhaps there is some sort of filter I can apply to this FFT output?
EDIT: The context of this is that I am trying to take one-hit sound samples (like someone pressing a key on a piano) and extract the loudest partials. In the image below, the peaks above 2000 Hz are important, because they are discrete partials of the given sound (which happens to be a sort of bell). However, the peaks that are scattered about right near 523 seem to be just artifacts, and I want to ignore them.
If the peak is broad, it could indicate that the peak frequency is modulated (AM, FM or both), or is actually a composite of several spectral peaks, themselves each potentially modulated.
For instance, a piano note may be the result of the hammer hitting up to 3 strings that are all tuned just a tiny fraction differently, and they all can modulate as they exchange energy between strings though the piano frame. Guitar strings can change frequency as the pluck shape distortion smooths out and decays. Bells change shape after they are hit, which can modulate their spectrum. Etc.
If the sound itself is "messy" then you need a good definition of what you mean by the "real" peak, before applying any sort of smoothing or side-band rejection filter. e.g. All that "messiness" may be part of what makes a bell sound like a real bell instead of an electronic sinewave generator.
Try convolving your FFT (treating it as a signal) with a rectangular pulse (pulse = ones(1,20)/20;). This might get rid of some of them. Your maxima will be shifted by 10 frequency bins to the right; take that into account. You would basically be integrating your signal. Similar techniques are used in the Pan-Tompkins algorithm for heart beat identification.
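A sketch of that idea, assuming X holds your FFT output; using the 'same' option of conv keeps the result aligned with the original bins, so no shift correction is needed:

pulse = ones(1,20)/20;                   % 20-bin rectangular pulse
smoothed = conv(abs(X), pulse, 'same');  % integrate/smooth the magnitude spectrum
[~, idx] = max(smoothed);                % peak location in the original bin numbering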
I worked on a similar problem once, and chose to use Savitzky-Golay filters for smoothing the spectrum data. I could get some significant peaks, and it didn't mess too much with the overall spectrum.
But I did run into the problem hotpaw2 is warning you about: I lost important characteristics along with the "messiness", so I truly recommend you listen to him. But if you think that won't be a problem in your case, I think Savitzky-Golay can help.
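For what it's worth, a minimal sketch of that approach (the order, frame length and prominence threshold are illustrative; sgolayfilt and findpeaks are in the Signal Processing Toolbox; X is assumed to be the FFT of your sound):

magX = abs(X);                                 % magnitude spectrum (X assumed)
smoothX = sgolayfilt(magX, 3, 21);             % cubic Savitzky-Golay fit, 21-bin frames
[pks, locs] = findpeaks(smoothX, 'MinPeakProminence', 0.1*max(smoothX));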
There are non-FFT methods for creating a frequency-domain representation of time-domain data which are better for noisy data sets, like maximum-entropy (max-ent) reconstruction.
For noisy time-series data, a max-ent reconstruction will be capable of distinguishing true peaks from noise very effectively (without adding any artifacts or other modifications to suppress noise).
Max-ent works by "guessing" a spectrum for the time-domain data, then doing an inverse transform and comparing the result with the actual time-series data, iteratively. The final output of max-ent is a frequency-domain spectrum (like the one you show above).
There are implementations in Java, I believe, for 1-D spectra, but I have never used one.
Does anyone know how to use filters in MATLAB?
I am not an aficionado, so I'm not concerned with roll-off characteristics etc — I have a 1 dimensional signal vector x sampled at 100 kHz, and I want to perform a high pass filtering on it (say, rejecting anything below 10Hz) to remove the baseline drift.
There are Butterworth, Elliptical, and Chebyshev filters described in the help, but no simple explanation of how to implement them.
There are several filters that can be used, and the actual choice of the filter will depend on what you're trying to achieve. Since you mentioned Butterworth, Chebyshev and Elliptical filters, I'm assuming you're looking for IIR filters in general.
Wikipedia is a good place to start reading up on the different filters and what they do. For example, Butterworth is maximally flat in the passband and the response rolls off in the stop band. In Chebyshev, you have a smooth response in either the passband (type 2) or the stop band (type 1) and larger, irregular ripples in the other, and lastly, in Elliptical filters, there are ripples in both bands. The following image is taken from wikipedia.
So in all three cases, you have to trade something for something else. In Butterworth, you get no ripples, but the frequency response roll-off is slower. In the above figure, it takes from 0.4 to about 0.55 to get to half power. In Chebyshev, you get steeper roll-off, but you have to allow for irregular and larger ripples in one of the bands, and in Elliptical, you get near-instant cut-off, but you have ripples in both bands.
The choice of filter will depend entirely on your application. Are you trying to get a clean signal with little to no losses? Then you need something that gives you a smooth response in the passband (Butterworth/Cheby2). Are you trying to kill frequencies in the stopband, and you won't mind a minor loss in the response in the passband? Then you will need something that's smooth in the stop band (Cheby1). Do you need extremely sharp cut-off corners, i.e., anything a little beyond the passband is detrimental to your analysis? If so, you should use Elliptical filters.
The thing to remember about IIR filters is that they've got poles. Unlike FIR filters where you can increase the order of the filter with the only ramification being the filter delay, increasing the order of IIR filters will make the filter unstable. By unstable, I mean you will have poles that lie outside the unit circle. To see why this is so, you can read the wiki articles on IIR filters, especially the part on stability.
To further illustrate my point, consider the following band pass filter.
fpass=[0.05 0.2];%# passband
fstop=[0.045 0.205]; %# stopband edges
Rpass=1;%# max permissible ripple in the passband (dB)
Astop=40;%# min 40dB attenuation
n=cheb2ord(fpass,fstop,Rpass,Astop);%# calculate minimum filter order to achieve these design requirements
[b,a]=cheby2(n,Astop,fstop);
Now if you look at the zero-pole diagram using zplane(b,a), you'll see that there are several poles (x) lying outside the unit circle, which makes this approach unstable.
and this is evident from the fact that the frequency response is all haywire. Use freqz(b,a) to get the following
To get a more stable filter with your exact design requirements, you'll need to use second order filters using the z-p-k method instead of b-a, in MATLAB. Here's how for the same filter as above:
[z,p,k]=cheby2(n,Astop,fstop);
[s,g]=zp2sos(z,p,k);%# create second order sections
Hd=dfilt.df2sos(s,g);%# create a dfilt object.
Now if you look at the characteristics of this filter, you'll see that all the poles lie inside the unit circle (hence stable) and matches the design requirements
The approach is similar for butter and ellip, with equivalent buttord and ellipord. The MATLAB documentation also has good examples on designing filters. You can build upon these examples and mine to design a filter according to what you want.
To use the filter on your data, you can either do filter(b,a,data) or filter(Hd,data) depending on which filter you eventually use. If you want zero phase distortion, use filtfilt. However, this does not accept dfilt objects. So to zero-phase filter with Hd, use the filtfilthd file available on the MathWorks File Exchange.
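To make the last step concrete, a small sketch of applying these designs to a vector data (the variable name is assumed):

y_ba = filter(b, a, data);   % transfer-function form (only if that design is stable)
y_hd = filter(Hd, data);     % the second-order-section dfilt object
% zero-phase filtering with Hd: y_zp = filtfilthd(Hd, data); after getting filtfilthd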
EDIT
This is in response to #DarenW's comment. Smoothing and filtering are two different operations, and although they're similar in some regards (a moving average is a low pass filter), you can't simply substitute one for the other unless you can be sure that it won't be of concern in the specific application.
For example, implementing Daren's suggestion on a linear chirp signal from 0-25 kHz, sampled at 100 kHz, this is the frequency spectrum after smoothing with a Gaussian filter
Sure, the drift close to 10Hz is almost nil. However, the operation has completely changed the nature of the frequency components in the original signal. This discrepancy comes about because they completely ignored the roll-off of the smoothing operation (see red line), and assumed that it would be flat zero. If that were true, then the subtraction would've worked. But alas, that is not the case, which is why an entire field on designing filters exists.
Create your filter - for example using [B,A] = butter(N,Wn,'high') where N is the order of the filter - if you are unsure what this is, just set it to 10. Wn is the cutoff frequency normalized between 0 and 1, with 1 corresponding to half the sample rate of the signal. If your sample rate is fs, and you want a cutoff frequency of 10 Hz, you need to set Wn = (10/(fs/2)).
You can then apply the filter by using Y = filter(B,A,X) where X is your signal. You can also look into the filtfilt function.
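Putting that together for the 10 Hz high-pass at fs = 100 kHz (the filter order of 4 is an arbitrary starting point; second-order sections are used because such a low normalized cutoff can be numerically touchy in b-a form):

fs = 100e3;                           % sample rate
Wn = 10/(fs/2);                       % normalized cutoff for 10 Hz
[z, p, k] = butter(4, Wn, 'high');    % order 4 chosen arbitrarily
[sos, g]  = zp2sos(z, p, k);          % second-order sections for numerical safety
y = filtfilt(sos, g, x);              % zero-phase high-pass filtering of x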
A cheapo way to do this kind of filtering that doesn't involve straining brain cells on design, zeros and poles and ripple and all that, is:
* Make a copy of the signal
* Smooth it. For a 100 kHz signal and wanting to eliminate about 10 Hz on down, you'll need to smooth over about 10,000 points. Use a Gaussian smoother, or a box smoother maybe 1/2 that width applied twice, or whatever is handy. (A simple box smoother of total width 10,000 used once may produce unwanted edge effects)
* Subtract the smoothed version from the original. Baseline drift will be gone.
If the original signal is spikey, you may want to use a short median filter before the big smoother.
This generalizes easily to 2D images, 3D volume data, whatever.
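A minimal 1-D sketch of that recipe (the window widths are illustrative; gausswin and medfilt1 assume the Signal Processing Toolbox):

x = medfilt1(x, 5);                   % optional: knock down spikes first
w = gausswin(10001); w = w/sum(w);    % Gaussian smoother roughly 10,000 points wide
baseline  = conv(x, w, 'same');       % smoothed copy of the signal
detrended = x - baseline;             % baseline drift removed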