Good day,
I have a document of data measured during experiments. The first column of the document is time, followed by torque and displacement readings.
My measuring equipment was supposed to sample at 200 Hz; however, during the experiment, as the amount of measured data grew, the computer slowed down, resulting in sampling rates lower than 200 Hz.
However, I require readings at an exact sampling frequency (anything between 0 and 200 Hz is acceptable). How can I modify/interpolate my data to match the desired frequency?
For general resampling, use the resample function (see its doc for examples of use). It lets you specify the resampling factor as a rational number, with the limitation that the numerator and denominator can't be too large. That imposes restrictions when the resampling factor is very close to 1; other than that it is the way to go.
If you need very fine control of the resampling factor (for example, correcting the sampling frequency by amounts of the order of 1 part per million, which requires a resampling factor very close to 1), I suggest you use linear interpolation with the function interp1 (see its doc). This interpolation method is not as good as that of resample, but the error is negligible for resampling factors close to 1, and it gives you very fine control of the resampling factor.
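For illustration, here is a minimal sketch of the interp1 route, assuming the logged readings sit in a matrix called data with columns [time, torque, displacement] and that a 100 Hz output rate is wanted (the names and the rate are placeholders, not from the question):

    t  = data(:,1);                       % original (slightly irregular) time stamps, in seconds
    fs = 100;                             % desired output rate in Hz (any value up to 200)
    t_uniform = (t(1):1/fs:t(end)).';     % uniform time grid at the target rate

    % Linear interpolation of each measured channel onto the uniform grid
    torque_u       = interp1(t, data(:,2), t_uniform, 'linear');
    displacement_u = interp1(t, data(:,3), t_uniform, 'linear');

    resampled = [t_uniform, torque_u, displacement_u];

Because the original time stamps are used directly, this also absorbs the slight irregularity caused by the computer slowing down.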
Related
I'm dealing with CWT, and I have a big problem converting scales to frequencies. In the MATLAB Wavelet Tutorial they use this expression to convert scales to frequencies
But if I use the default function scal2freq I obtain a different result.
I don't understand the role of the Morlet Fourier Factor
Thanks in advance
It is a pretty complicated concept, and I only somewhat understand it myself. I'll write some points here so that you can figure it out more easily.
A simple fact is that:
Scale is inversely proportional to frequency.
For example, imagine we have a 1-100 Hz range of frequencies in some time series data, such as stock market data or earthquake data. Scale is "supposed to be" the inverse of that. For instance, if the scale ranged from 1 to 100, we would have:
Scale (1/Hz)    Frequency (Hz)
1               100
50              2
100             1
Therefore,
The frequency here is not the real frequency of the time series data (e.g., stock market, earthquake) that we know of; the two are only related, and inversely so.
And we can safely say that what we are calculating here are "pseudo-frequencies", which is what MATLAB does (by approximation). You can read about the approximation process in the documentation, in the section on pseudo-frequencies:
MATLAB calculates those pseudo-frequencies based on:
In wavelet analysis, the way to relate scales to frequencies is to determine the center frequency of the wavelet function:
which you can see visually in this image; of course it differs when we change the type of wavelet function used in the calculation, so that center frequency will change each time in our approximation process:
That "MorletFourierFactor" is a variable that approximates a constant so that, when you take 1/scale, the result closely approximates those "pseudo-frequencies".
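If it helps to see the relationship numerically, here is a small sketch using the Wavelet Toolbox functions centfrq and scal2frq (the scales 1:128 and the 1 kHz sampling rate are made-up values):

    scales = 1:128;
    dt     = 1/1000;                           % sampling period in seconds

    Fc = centfrq('morl');                      % center frequency of the Morlet wavelet
    f_manual  = Fc ./ (scales * dt);           % pseudo-frequency = Fc / (scale * dt)
    f_toolbox = scal2frq(scales, 'morl', dt);  % the toolbox's own conversion

    max(abs(f_manual - f_toolbox))             % should be essentially zero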
I thought this image about shifting (time axis) and scaling (frequency axis) might be helpful to look at as well:
The bottom line is: don't worry about pseudo-frequencies, you probably won't need them. If you want a frequency spectrum, you can apply one of the usual frequency-domain methods (such as the Fast Fourier Transform) to whatever time series data you have.
If you really do want that mapping, you can also try to design a method to approximate it yourself.
Source
Harvard Seismology
I have a 10 Hz time series measured by a fast instrument and a 1 minute time series measured by a slow reference instrument. The data consists of a fluctuating meteorological parameter. The slow reference instrument is used to calibrate the fast instrument measurements. Both time series are synchronised.
My idea:
Average the 10 Hz data into 1 minute blocks.
Take 5 one-minute blocks from each time series and calculate the linear regression equations.
Use the regression equations to calibrate the 10 Hz data in 5 minute blocks (3000 data points).
What would be the best way to match (calibrate) the high frequency data using the low frequency data? I use MATLAB.
More background: The fast instrument outputs a fluctuating voltage signal while the slow instrument outputs the true value of a trace gas concentration in ppb (parts per billion). The slow instrument samples every ten seconds and outputs the average every one minute.
In short, I would like to have my fast signal also in ppb but without losing its integrity (I need the turbulent fluctuations to remain unfiltered), hence the need to use a linear fit.
Here's my approach and the results I got...
I modelled the problem as there being
a real (unmeasured by instruments) signal.
Let's call this real.
a slow signal - which is just the real signal sampled once a minute.
Let's call this lf (short for low frequency).
a fast signal - real signal + noise + signal drift.
Let's call this hf (short for high frequency).
The task was to take the slow and fast signals and try to reconstruct the real signal.
(Using least squares as a scoring metric)
Strategy:
Define a "piecewise linear filter" - this takes a signal and returns a piecewise linear version of it (with breakpoints where the slow signal is measured).
NOTE: The slow signal is considered piecewise anyway.
Define a forwards-backwards low pass filter.
Define "uncertainty" to be 0 at the points where the low frequency signal is measured. It linearly increases to 1 when the timestamp is halfway between low frequency signal measurements.
Now, take your high frequency signal and filter it with the low pass filter.
Let's call this hf_lp
Take hf_lp and apply the "piecewise linear filter" to it.
Let's call this hf_lp_pl
Subtract the last two from each other.
I.e. hf_diff = hf_lp - hf_lp_pl.
You now want to find some function that estimates by how much hf_diff should be added to the low frequency signal (lf) such that the squared error between real_estimated and real is minimized.
I fitted a function along the lines of real_estimated = lf + hf_diff.*(a1*uncertainty + a2*uncertainty.^2 + a3*uncertainty.^3)
Use fminsearch or other optimization techniques to get a1, a2, a3...
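Here is a rough MATLAB sketch of that strategy. The variable names (t_hf, hf, t_lf, lf, real_signal) and the filter settings are illustrative, and real_signal is only available because, in this simulation, the real signal is known:

    fs_hf  = 10;                                  % fast sampling rate, Hz
    cutoff = 0.25;                                % low-pass cutoff, Hz (illustrative)
    [b, a] = butter(4, cutoff/(fs_hf/2));
    hf_lp  = filtfilt(b, a, hf);                  % forward-backward (zero-phase) low-pass

    % "Piecewise linear filter": sample the filtered fast signal at the slow
    % time stamps, then interpolate back onto the fast time axis
    hf_lp_at_lf = interp1(t_hf, hf_lp, t_lf, 'linear');
    hf_lp_pl    = interp1(t_lf, hf_lp_at_lf, t_hf, 'linear');

    hf_diff = hf_lp - hf_lp_pl;                   % medium-frequency content

    % Uncertainty: 0 at the slow measurements, rising linearly to 1 halfway
    % between them (assumes the fast time stamps lie within the slow time span)
    pos         = interp1(t_lf, 1:numel(t_lf), t_hf, 'linear');
    uncertainty = 1 - abs(2*(pos - floor(pos)) - 1);

    lf_on_hf = interp1(t_lf, lf, t_hf, 'linear'); % slow signal on the fast time axis

    model = @(p) lf_on_hf + hf_diff .* (p(1)*uncertainty + p(2)*uncertainty.^2 + p(3)*uncertainty.^3);
    cost  = @(p) sum((model(p) - real_signal).^2);
    p_hat = fminsearch(cost, [1 0 0]);
    real_estimated = model(p_hat);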
Here is a sample plot of my results - you can see that real_estimated is much closer to real than the slow signal lf.
Closing thoughts...
The fast signal contains too much very-low-frequency content (drift) and too much very-high-frequency content (noise).
But it has valuable medium frequency info.
The slow signal has perfect low frequency information, but no medium frequency info.
The strategy above is really just one way of extracting the medium frequencies from the fast signal and adding it to the low frequency signal.
This way, we get the best of all worlds: low frequencies, medium frequencies and low noise.
I have a signal that may not be periodic. We take about 15 seconds' worth of samples (at a 10 kHz sampling rate) and we need to do the FFT on that signal to get the frequency content.
The problem is that we are implementing this FFT on an embedded system (DSP) which provides a library FFT of max. 1024 length. That is, it takes in 1024 data points as input and provides a 1024 point output.
What is the correct way of obtaining an FFT on the full 150000 point input?
You could run the FFT on each 1024-point block and average them to get an average power spectrum on the lower-resolution 1024-point frequency axis (512 samples from 0 to the Nyquist frequency, fs/2, so about 10 Hz resolution for your 10 kHz sampling). You should average the magnitudes of the component FFTs (i.e., sqrt(re^2+im^2)); otherwise the average will be sensitive to the drifting phase within each subwindow, which will depend on the precise frequency of the sinusoid.
If you think the periodic component may be at a low frequency, such that it will show up in a 15 sec sample but not complete any cycles in a 1024/10k ~ 100 ms sample (i.e., below 10 Hz or so), you could downsample your input. You could try something as crude as averaging every 100 points to get a somewhat-distorted signal at a 100 Hz sampling rate, then pack 10.24 sec worth into your 1024-point sequence to pass to the FFT.
You could combine these two approaches by using a smaller downsampling factor and then doing the magnitude-averaging of successive windows.
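A minimal sketch of the magnitude-averaging idea, written in MATLAB notation so it is easy to check on a PC first (x stands for the 150000-sample record; on the DSP the fft call would be replaced by the 1024-point library routine):

    N    = 1024;
    fs   = 10000;
    nblk = floor(numel(x)/N);                % number of complete 1024-point blocks
    acc  = zeros(N, 1);

    for k = 1:nblk
        seg = x((k-1)*N + (1:N));            % k-th 1024-point block
        acc = acc + abs(fft(seg(:)));        % accumulate magnitudes, not complex values
    end
    avg_mag = acc / nblk;

    f = (0:N/2-1) * fs / N;                  % frequency axis, about 9.8 Hz per bin
    plot(f, avg_mag(1:N/2));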
I'm confused why the system provides an FFT only up to 1024 points - is there something about the memory that makes it harder to access larger blocks?
Calculating a 128k point FFT using a 1k FFT as a subroutine is possible, but you'd end up recoding a lot of the FFT yourself. Maybe you should forget about the system library and use some other FFT implementation, without the length limitation, that will compile on your target. It may not incorporate all the optimizations of the system-provided one, but you're likely to lose a lot of that advantage when you embed it within the custom code needed to use the partial outputs of the multiple shorter FFTs to produce the long FFT.
Probably the quickest way to do the hybrid FFT (1024 points using the library, then added code to combine them into a 128k point FFT) would be to take an existing full FFT routine (a radix-2, decimation-in-time (DIT) routine for instance), but then modify it to use the system library for what would have been the first 10 stages, which amount to calculating 128 individual 1024-point FFTs on different subsets of the original signal (not, unfortunately, successive windows, but the partial-bit-reversed subsets), then let the remaining 7 stages of butterflies operate on those partial outputs. You'd want to get a pretty solid understanding of how the DIT FFT works to implement this.
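To make the butterfly idea concrete, here is a sketch of a single decimation-in-time combining stage in MATLAB notation: two half-length FFTs (which on the DSP would come from the library routine) are merged into one double-length FFT, and the full 128k transform would repeat this idea through the remaining stages.

    N = 2048;
    x = randn(N, 1);

    E = fft(x(1:2:end));                  % FFT of even-indexed samples (length N/2)
    O = fft(x(2:2:end));                  % FFT of odd-indexed samples  (length N/2)

    k = (0:N/2-1).';
    W = exp(-2i*pi*k/N);                  % twiddle factors

    X = [E + W.*O; E - W.*O];             % one stage of butterflies

    max(abs(X - fft(x)))                  % sanity check: matches the direct FFT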
I use MATLAB to calculate the FFT of a time series. The signal has an unknown fundamental frequency (~80 MHz in this case), together with several higher-order harmonics (1st-20th order). However, due to the finite sampling frequency (500 MHz in this case), I always get mixing frequencies from the higher-order harmonics (7th-20th), e.g. the 7th with a peak at abs(2*500-80*7)=440 MHz, the 8th at 360 MHz and the 13th with a peak at abs(13*80-2*500)=40 MHz. Does anyone know how to get rid of these artificial mixing frequencies? One possible way is to increase the sampling frequency to a sufficiently large value. However, my data set has a fixed number of points and a fixed time range, so the sampling frequency is actually determined by the properties of the data set. Any solutions to this problem?
(I have an image for this problem but I don't have enough reputation to post it. Sorry for the inconvenience in understanding this question.)
You are hitting on a fundamental property of sampling: when you sample data at a fixed frequency fs, you cannot tell the difference between two signals with the same amplitude but different frequencies, where one has f1 = fs/2 - d and the other has f2 = fs/2 + d. This effect is frequently used to advantage - for example in mixers - but at other times it's an inconvenience.
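A quick demonstration with made-up numbers close to your case (fs = 500 MHz, d = 60 MHz): the two tones produce literally the same samples.

    fs = 500e6;
    t  = (0:1023) / fs;

    x1 = cos(2*pi*(fs/2 - 60e6)*t);       % 190 MHz tone
    x2 = cos(2*pi*(fs/2 + 60e6)*t);       % 310 MHz tone

    max(abs(x1 - x2))                     % ~1e-15: indistinguishable once sampled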
Unless you are looking for this mixing effect (exploited, for example, by the digital receiver in a modern MRI scanner), you need to apply a "brick wall" filter with a cutoff frequency of fs/2 before the signal is digitized. It is not uncommon to have filters with a roll-off of 24 dB/octave or higher - in other words, they let "everything through" below the cutoff, and "stop everything" above it.
Data acquisition vendors will often supply filtering solutions with their ADC boards for exactly this reason.
Long way to say: "That's how digitization works". But it's true - that is how digitization works.
Typically, one low-pass filters the signal to below half the sample rate before sampling. Otherwise, after sampling, there is usually no way to separate any aliased high-frequency noise (your high-order harmonics) from the more useful spectrum below half the sample rate (the Nyquist frequency).
If you don't filter the signal before sampling it, the defect is inherent in the sample vector, not the FFT.
I'm trying to do some signal processing using an audio file (piano recordings)
I find the note onsets and then perform an FFT on each onset. However, I find that for certain notes the 2nd harmonic has a much greater amplitude than the fundamental. Why is that?
How can I eliminate this and get the correct frequency?
Start by using a low-pass filter to trim out some of the higher-order harmonics. If the piano recordings that you are trying to process were recorded within a 3 octave range, that should help substantially.
Next, try adjusting your wave amplitude. Here's an article that discusses how harmonic distortion degrades a signal, and how you can exchange signal-to-noise ratio for harmonic distortion.
http://www.mathworks.com/help/signal/examples/analyzing-harmonic-distortion.html
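As a hedged sketch of the filtering step (the file name, cutoff, and filter order are assumptions to adapt to your recordings; the cutoff should sit a bit above the highest fundamental you expect):

    [x, fs] = audioread('piano.wav');     % placeholder file name
    fc = 1000;                            % cutoff in Hz: just above the highest expected fundamental

    [b, a] = butter(6, fc/(fs/2));        % 6th-order Butterworth low-pass
    x_lp   = filtfilt(b, a, x);           % zero-phase filtering keeps note onsets aligned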
If you want more of a home-built solution without signal filtering, here's what I'd try, assuming that the maximum signal amplitude corresponds to either the fundamental, the 2nd harmonic, or the 3rd harmonic:
1) Find the frequency f of the maximum signal
2) If the signal at f/2 or f/3 is much greater than the noise floor, call that frequency your fundamental (a sketch of this method follows after the caveats below)
Alternatively,
1) Find the frequency f of the maximum signal
2) Search the interval [f/2, 2*f] and find the peak nearest to f (other than f itself).
3) Assume the difference between f and that nearest peak is the fundamental frequency.
You'll need to adapt these methods to your data.
Make sure your data doesn't exhibit only odd-order harmonics or have very strong high-order harmonics. These methods won't work well if multiple notes are played simultaneously.
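Here is a sketch of the first method, assuming you already have the FFT magnitude mag of one onset window and its frequency axis f; the factor of 10 on the noise floor is a guess you would tune:

    [~, imax] = max(mag);
    f_peak    = f(imax);                           % frequency of the strongest peak

    noise_floor = median(mag);                     % crude noise-floor estimate
    f0 = f_peak;                                   % default: strongest peak is the fundamental

    for divisor = [2 3]
        [~, i_sub] = min(abs(f - f_peak/divisor)); % bin nearest f/2, then f/3
        if mag(i_sub) > 10 * noise_floor           % "much greater than the noise floor"
            f0 = f(i_sub);                         % that sub-harmonic is the fundamental
            break
        end
    end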
You could also try correcting your data for human ear sensitivity, as that may be the reason why the 2nd harmonics are louder on an FFT than what the ear detects relative to the fundamental. See http://en.wikipedia.org/wiki/Absolute_threshold_of_hearing