Sensor Decimation - filtering

I have a quick question on sensor data decimation, which I'm sure is pretty easy but thought I'd check. I have a sensor that is sampling at 25Hz and the data is being sent across a serial RS232 connection to an external data logger, which is logging the data at 10Hz.
I think that if I want to recover a true 10 Hz signal, I should pass the original 25 Hz signal through a decimation process (i.e. filtering followed by down sampling). Is this correct?
If it is correct, I was thinking that I should decimate the original 25Hz signal by passing it through a low pass filter with a cutoff frequency of ~10 Hz, to remove the higher frequency components. The filtered signal would then be down sampled to produce a final signal. This down sampling would be achieved by extracting a value every 2.5 samples from the filtered signal.
So in other words, the 1st value of the final signal would be the first sample of the filtered signal, the 2nd value would be the average of samples 2 & 3 from the filtered signal, the 3rd value would be sample 5 from the filtered signal, the 4th would be the average of samples 7 & 8, and so on.
Hopefully that makes sense. I think that would provide me with a clean 10Hz signal.
Many thanks for your time and efforts on this, they are very much appreciated
Cheers
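For reference, a minimal MATLAB sketch of a standard filter-then-downsample chain (Signal Processing Toolbox; placeholder data; note that resample's built-in anti-aliasing low-pass cuts off near the new Nyquist frequency of 5 Hz rather than 10 Hz):
fs_in = 25;                            % sensor rate, Hz
fs_out = 10;                           % logger rate, Hz
x25 = randn(1, 250);                   % placeholder: 10 s of 25 Hz samples
x10 = resample(x25, fs_out, fs_in);    % anti-alias filter + rate change to 10 Hz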

The type of filtering you should use will depend in part on what you are using this signal for and how noisy the captured data is.
In general, you should not keep changing the effective sampling scheme of the filter (here, alternating between taking a single sample and a two-sample average), as this could introduce artificial periodic noise into the captured data. My guess is that for this process you are sampling something that doesn't change rapidly. You might want to just take a rolling average of the last 3 samples at each logging instant, even though some of the data averaged into each logged sample will overlap.
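A minimal MATLAB sketch of that rolling-average idea (placeholder data and variable names of my own; each 10 Hz logged value is the mean of the most recent three 25 Hz samples):
fs_in = 25;  fs_out = 10;
x25 = randn(1, 250);                    % placeholder: 10 s of 25 Hz samples
t25 = (0:numel(x25)-1)/fs_in;
x_avg = movmean(x25, [2 0]);            % mean of the current sample and the previous two
t10 = 0:1/fs_out:t25(end);              % 10 Hz logging instants
idx = 1 + floor(t10*fs_in);             % most recent 25 Hz sample at each logging instant
x10 = x_avg(idx);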

Related

Lowpass filter Phase Error

As the graph shows, I have slightly over 0.01 milliseconds of delay introduced by the transfer function of a simple low-pass ASK filter in the demodulation part.
I need to get rid of this delay by any means.
Scope Results
I tried increasing the frequency in the denominator coefficient of the transfer function, but the delay stayed the same.
In my latest attempts, I tried to create a subsystem that outputs a binary 1 at the 0.5 millisecond interval if the input is above a threshold of 0.5e-6, holds that value until the next interval at 1.5 milliseconds, where it outputs 0 if the input is below 0.05e-6, and so on. I tried to follow the approach here, but it didn't work in my scenario. I also tried this here, but my attempts failed.
Here is an overall implementation of the demodulation part using Simulink.
And the following is the transfer function for a simple low-pass ASK filter:
Help here is much appreciated.
It is impossible for a linear filter to filter a signal (of any finite bandwidth above DC) without a delay. It takes some time (usually related to the period of the center frequency of a bandpass filter) for the filter to gather enough information from the signal to differentiate between a waveform to pass and a waveform to attenuate.
You might be able to pass a sharper rise time or fall time by using a matched filter with the expected transient(s) as the template(s), but that would have an even greater delay.
Usually this delay is accounted for by using a matching delay in other parts of the system to synchronize timing as needed.
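For illustration, here is how such a matching delay might be set up in MATLAB, swapping in a linear-phase FIR low-pass (whose group delay is a constant (N-1)/2 samples) for the poster's Simulink transfer function; all values here are assumptions:
fs = 1e6;                               % assumed sample rate, Hz
b  = fir1(64, 0.1);                     % example low-pass; group delay = 64/2 = 32 samples
gd = round(mean(grpdelay(b, 1)));       % constant group delay of a linear-phase FIR, in samples
x  = randn(1, 1000);                    % placeholder modulated input
y  = filter(b, 1, x);                   % filtered path, delayed by gd samples
x_ref = [zeros(1, gd), x(1:end-gd)];    % reference path delayed by the same amount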

Low pass filter with clipped data

I have a set of data that basically consists of one low frequency component and one high frequency component, where the low frequency is what I would like to recover. This seems to me like a perfect use case for a low pass filter; however, a problem arises because the data is clipped.
As the clipped points are basically constant for short intervals, they add some low frequency junk which disturbs the signal of interest. I have tried getting around the problem by simply omitting the points subject to clipping, but this method seems slightly naive. Is there a better way?
I have included a few figures which show simulated data to illustrate what I am working with.
Typical signal: it starts with values close to zero, and then both the low frequency and the high frequency components kick in simultaneously.
Running the signal through a low pass filter yields the following results; note the difference between the case where clipping is present in the data and the case where it is not.
When filtering the data, I use MATLAB's built-in function fir1, using the following call:
Signal_lowpass = filter(fir1(100, fc, 'low'), 1, Signal);
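One step beyond simply omitting the clipped points is to replace them with interpolated values before filtering. A minimal sketch, with placeholder data, an assumed clipping level and an assumed cutoff:
fs = 1000;                                    % assumed sample rate, Hz
t  = (0:1/fs:2-1/fs)';
Signal = min(max(sin(2*pi*20*t) + 0.5*sin(2*pi*200*t), -1.2), 1.2);  % placeholder clipped data
clip_level = 1.2;                             % assumed clipping level
clipped = abs(Signal) >= clip_level;          % samples subject to clipping
Signal_fix = Signal;
Signal_fix(clipped) = interp1(t(~clipped), Signal(~clipped), t(clipped), 'pchip');
fc = 50/(fs/2);                               % assumed normalized cutoff for the fir1 call
Signal_lowpass = filter(fir1(100, fc, 'low'), 1, Signal_fix);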
All the plots that you have shown are time-domain representations of your signals. It would help if you showed the frequency content (a magnitude spectrum from fft should suffice) of your clipped signal and also of the low pass filtered signal. From the frequency content of your signal one can design a filter that suppresses the clipping artifacts as well as the high frequency component. If your low frequency signal is a single tone (which it looks like from the time domain graphs), a bandpass filter around its frequency would help to extract it.
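A minimal MATLAB sketch of that check (reusing placeholder clipped data; the tone frequency and the bandpass width are assumptions):
fs = 1000;                                    % assumed sample rate, Hz
t  = (0:1/fs:2-1/fs)';
Signal = min(max(sin(2*pi*20*t) + 0.5*sin(2*pi*200*t), -1.2), 1.2);  % placeholder clipped data
N = numel(Signal);
X = abs(fft(Signal));
f = (0:N-1)*fs/N;
plot(f(1:floor(N/2)), X(1:floor(N/2)));       % one-sided magnitude spectrum
xlabel('Frequency (Hz)'); ylabel('|X(f)|');
f0 = 20;                                      % assumed frequency of the low-frequency tone, Hz
bp = fir1(200, [f0/2, 3*f0/2]/(fs/2));        % two band edges make fir1 design a bandpass
Signal_band = filtfilt(bp, 1, Signal);        % zero-phase bandpass around the tone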

Using low frequency data to calibrate high frequency data

I have a 10 Hz time series measured by a fast instrument and a 1 minute time series measured by a slow reference instrument. The data consists of a fluctuating meteorological parameter. The slow reference instrument is used to calibrate the fast instrument measurements. Both time series are synchronised.
My idea:
Average the 10 Hz data into 1 minute blocks.
Take 5 one-minute blocks from each time series and calculate the linear regression equations.
Use the regression equations to calibrate the 10 Hz data in 5 minute blocks (3000 data points).
What would be the best way to match (calibrate) the high frequency data using the low frequency data? I use MATLAB.
More background: The fast instrument outputs a fluctuating voltage signal while the slow instrument outputs the true value of a trace gas concentration in ppb (parts per billion). The slow instrument samples every ten seconds and outputs the average every one minute.
In short, I would like to have my fast signal also in ppb but without losing its integrity (I need the turbulent fluctuations to remain unfiltered), hence the need for a linear fit.
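A minimal MATLAB sketch of steps 1-3 of the idea above (the variable names and placeholder data are mine; v10 is the fast 10 Hz voltage, c1 the 1-minute reference in ppb):
fs  = 10;  blk = 60*fs;                         % 600 fast samples per 1-minute block
v10 = 2 + 0.1*randn(1, 5*blk);                  % placeholder: 5 minutes of fast voltages
c1  = 100 + randn(1, 5);                        % placeholder: 5 one-minute reference values, ppb
nblk  = numel(c1);
v_avg = mean(reshape(v10, blk, nblk), 1);       % step 1: 1-minute means of the fast data
p = polyfit(v_avg, c1, 1);                      % step 2: linear regression over the 5 blocks
v10_ppb = polyval(p, v10);                      % step 3: calibrate the whole 5-minute block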
Here's my approach and the results I got...
I modelled the problem as there being
a real (unmeasured by instruments) signal.
Let's call this real.
a slow signal - which is just the real signal sampled once a minute.
Let's call this lf (short for low frequency).
a fast signal - real signal + noise + signal drift.
Let's call this hf (short for high frequency).
The task was to take the slow and fast signals and try to reconstruct the real signal.
(Using least squares as a scoring metric)
Strategy:
Define a "piecewise linear filter" - this takes a signal, and returns a piecewise version of it. (With each piecewise part occurring where the slow signal is measured.)
NOTE: The slow signal is considered piecewise anyway.
Define a forwards-backwards low pass filter.
Define "uncertainty" to be 0 at the points where the low frequency signal is measured. It linearly increases to 1 when the timestamp is halfway between low frequency signal measurements.
Now, take your high frequency signal and filter it with the low pass filter.
Let's call this hf_lp
Take hf_lp and apply the "piecewise linear filter" to it.
Let's call this hf_lp_pl
Subtract the last two from each other.
I.e. hf_diff = hf_lp - hf_lp_pl.
You now want to find some function that estimates by how much hf_diff should be added to the low frequency signal (lf) such that the squared error between real_estimated and real is minimized.
I fitted a function along the lines of real_estimated = lf + hf_diff.*(a1*uncertainty + a2*uncertainty.^2 + a3*uncertainty.^3)
Use fminsearch or other optimization techniques to get a1, a2, a3...
Here is a sample plot of my results - you can see that real_estimated is much closer to real than the slow signal lf.
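For completeness, here is a condensed MATLAB sketch of the strategy above (Signal Processing Toolbox; the variable names, the synthetic signals, the Butterworth cutoff and the one-minute spacing are all assumptions of mine):
fs = 10;  T = 600;                          % 10 Hz fast data, 10 minutes
t  = (0:1/fs:T-1/fs)';
real_sig = sin(2*pi*t/150) + 0.3*sin(2*pi*t/20);            % placeholder "real" signal
hf = real_sig + 0.2*randn(size(t)) + 1e-3*t;                % fast signal: real + noise + drift
t_lf = (0:60:T-60)';
lf   = interp1(t, real_sig, t_lf);                          % slow signal: real, once per minute
[b, a] = butter(4, 0.05);                                   % forwards-backwards low-pass
hf_lp  = filtfilt(b, a, hf);
hf_lp_pl = interp1(t_lf, interp1(t, hf_lp, t_lf), t, 'linear', 'extrap');  % piecewise linear filter
hf_diff  = hf_lp - hf_lp_pl;
lf_pl = interp1(t_lf, lf, t, 'linear', 'extrap');           % piecewise linear slow signal
uncertainty = min(mod(t, 60), 60 - mod(t, 60))/30;          % 0 at lf samples, 1 halfway between
est  = @(c) lf_pl + hf_diff.*(c(1)*uncertainty + c(2)*uncertainty.^2 + c(3)*uncertainty.^3);
cost = @(c) sum((est(c) - real_sig).^2);                    % least-squares scoring metric
a_opt = fminsearch(cost, [1 0 0]);                          % a1, a2, a3
real_estimated = est(a_opt);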
Closing thoughts...
The fast signal contains too much very low frequency (drift) and too much very high frequency (noise) content.
But it has valuable medium frequency info.
The slow signal has perfect low frequency information, but no medium frequency info.
The strategy above is really just one way of extracting the medium frequencies from the fast signal and adding it to the low frequency signal.
This way, we get the best of all worlds: low frequencies, medium frequencies and low noise.

FFT in Matlab in order to find signal frequency and create a graph with peaks

I have data from an accelerometer and made a graph of acceleration (y-axis) against time (x-axis). The sampling rate of the sensor is around 100 samples per second, but the samples are not equally spaced in time (for example it goes 10.046, 10.047, 10.163, etc.); the pace is not constant. There is also no analytic function for the signal I get. I need to find the frequency content of the signal and make a graph of frequency (Hz, x-axis) against acceleration (y-axis), but I don't know which FFT code suits my case.
Any help would be greatly appreciated
For an FFT to work, you will need to reconstruct the signal you have on a regular sampling interval. There are two ways you can do this:
Interpolate the data you already have to make an accurate guess at where the signal would be at each regular interval (a sketch of this approach follows below). However, the resulting FFT may contain significant inaccuracies.
OR
Adjust the device reading from the accelerometer to incorporate an accurate timer, such that results are always transmitted at regular intervals. This is what I would recommend.
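A minimal MATLAB sketch of the interpolation option (placeholder timestamps and data; the nominal rate and the 'pchip' interpolation method are assumptions):
t = cumsum(0.008 + 0.004*rand(1000, 1));      % placeholder: unevenly spaced timestamps, ~100 sps
a = sin(2*pi*12*t) + 0.1*randn(size(t));      % placeholder acceleration with a 12 Hz component
fs = 100;                                     % nominal sample rate, Hz
tu = (t(1):1/fs:t(end))';                     % uniform time grid
au = interp1(t, a, tu, 'pchip');              % resample onto the uniform grid
N  = numel(au);
A  = abs(fft(au - mean(au)))/N;               % magnitude spectrum, DC removed
f  = (0:N-1)*fs/N;
plot(f(1:floor(N/2)), 2*A(1:floor(N/2)));     % one-sided spectrum: peak near 12 Hz
xlabel('Frequency (Hz)'); ylabel('Acceleration amplitude');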

fft artificial defects due to finite sampling frequency

I use Matlab to calculate the FFT of a time series. The signal has an unknown fundamental frequency (~80 MHz in this case), together with several higher order harmonics (1st-20th order). However, due to the finite sampling frequency (500 MHz in this case), I always get mixing (alias) frequencies from the higher order harmonics (7th-20th), e.g. the 7th with a peak at abs(2*500-80*7)=440 MHz, the 8th at 360 MHz and the 13th with a peak at abs(13*80-2*500)=40 MHz. Does anyone know how to get rid of these artificial mixing frequencies? One possible way is to increase the sampling frequency to a sufficiently large value. However, my data set has a fixed number of points and a fixed time range, so the sampling frequency is actually determined by the properties of the data set. Any solutions to this problem?
(I have an image for this problem but I don't have enough reputation to post an image. Sorry for the inconvenience in understanding this question.)
You are hitting on a fundamental property of sampling: when you sample data at a fixed frequency fs, you cannot tell the difference between two signals with the same amplitude but different frequencies, where one has f1 = fs/2 - d and the other has f2 = fs/2 + d. This effect is frequently used to advantage - for example in mixers - but at other times it's an inconvenience.
Unless you are looking for this mixing effect (used, for example, in the digital receiver of a modern MRI scanner), you need to apply a "brick wall" anti-aliasing filter with a cutoff frequency of fs/2 before sampling. It is not uncommon to have filters with a roll-off of 24 dB / octave or higher - the steeper the roll-off, the closer the filter comes to letting "everything through" below the cutoff and "stopping everything" above it.
Data acquisition vendors will often supply filtering solutions with their ADC boards for exactly this reason.
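To see where each harmonic lands, here is a small MATLAB sketch using the numbers from the question (80 MHz fundamental, orders 1-20, fs = 500 MHz); for a real-valued signal, each aliased component in a 0-to-fs plot also has a mirror image at fs minus that frequency:
f0 = 80e6;  fs = 500e6;                      % values taken from the question
k  = (1:20)';                                % harmonic orders
f  = k*f0;                                   % true harmonic frequencies
f_alias  = mod(f, fs);                       % where each harmonic appears in a 0..fs plot
f_mirror = fs - f_alias;                     % mirror image for a real-valued signal
disp([k, f/1e6, f_alias/1e6, f_mirror/1e6])  % e.g. order 7 -> 60 and 440 MHz, order 13 -> 40 MHz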
Long way to say: "That's how digitization works". But it's true - that is how digitization works.
Typically, one low-pass filters the signal to below half the sample rate before sampling. Otherwise, after sampling, there is usually no way to separate any aliased high frequency noise (your high order harmonics) from the more useful spectrum below half the sample rate (the Nyquist frequency).
If you don't filter the signal before sampling it, the defect is inherent in the sample vector, not the FFT.