Calculate SNR of drifting signal in MATLAB

I have a signal that is both noisy and drifting. I want to calculate the noise of the signal, but I think the drift should not be counted as "noise". Using the snr() function in MATLAB gives me a really high noise value, I think because it takes the drift into account, right?
How can I calculate it? Is there any function available for this?
In this picture, for instance, the noise should be around 0.2%, right? ((22.45-22.36)/2)/22.38. (Although what I really want is the SNR value.)
Thank you!
Filtered signal with a low-pass filter with a really low cutoff frequency:

I would approach this by identifying the drift of the signal with a low-pass filter, then subtracting the filtered signal from the original signal. This leaves you with the noise signal, with the drift removed.
Filtering the signal might be the most difficult task, but by playing around with the filter parameters this will work.
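A minimal sketch of that idea, assuming the Signal Processing Toolbox and a made-up sample rate fs (x is your noisy, drifting signal; tune the cutoff to your drift's time scale):
fs    = 100;                  % [Hz] sample rate -- replace with yours (assumption)
fc    = 0.1;                  % [Hz] cutoff well below the noise band (assumption)
drift = lowpass(x, fc, fs);   % slow component = the drifting "signal" part
noise = x - drift;            % residual = the noise part
r     = snr(drift, noise);    % SNR in dB: ratio of summed squared magnitudes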

Related

Low pass filter with clipped data

I have a set of data that basically consists of one low-frequency component and one high-frequency component, where the low-frequency part is what I would like to recover. This seems to me like a perfect use case for a low-pass filter; however, a problem arises because the data is clipped.
As the clipped points are basically constant over short intervals, they add some low-frequency junk which disturbs the signal of interest. I have tried getting around the problem by simply omitting the points subject to clipping, but this method seems slightly naive; is there a better way?
I have included a few figures which shows simulated data to illustrate what I am working with.
Typical signal: it starts with values close to zero, and then both the low-frequency and the high-frequency components kick in simultaneously.
Running the signal through a low-pass filter yields the following result. Note the difference between having clipping in the data and not.
When filtering the data, I use MATLAB's built-in function fir1, using the following call:
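% fc is the cutoff normalized to the Nyquist frequency: fc = f_cutoff/(fs/2), with 0 < fc < 1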
Signal_lowpass = filter(fir1(100, fc, 'low'), 1, Signal);
All the plots that you have shown are time-domain representations of your signals. It would help if you showed the frequency response (the magnitude of the fft should suffice) of your clipped signal, and also of the low-pass-filtered signal. From the frequency response of your signal, one can design a filter which eliminates the clipping effects as well as the high-frequency component. If your low-frequency signal is a single tone (which it looks like from the time-domain graphs), a bandpass filter around its frequency would help to extract it.
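Something along these lines, as a sketch assuming the Signal Processing Toolbox (fs, f0, and the band edges are placeholders, not values from your data):
fs = 1000;                                % [Hz] sample rate (assumption)
N  = numel(Signal);
f  = (0:N-1)*fs/N;                        % frequency axis
X  = abs(fft(Signal));
plot(f(1:floor(N/2)), X(1:floor(N/2)))    % magnitude spectrum up to Nyquist
xlabel('Frequency [Hz]'), ylabel('|X(f)|')
f0 = 2;                                   % [Hz] tone location read off the plot (placeholder)
Signal_tone = bandpass(Signal, [0.5*f0, 1.5*f0], fs);  % narrow band around the tone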

FFT in Matlab in order to find signal frequency and create a graph with peaks

I have data from an accelerometer and made a graph of acceleration (y-axis) versus time (x-axis). The sample rate of the sensor is around 100 samples per second, but the samples are not equally spaced in time (for example it goes 10.046, 10.047, 10.163, etc.); the pace is not constant, and I have no analytic expression for the signal. I need to find the frequency content of the signal and make a graph of frequency (Hz, x-axis) versus acceleration (y-axis), but I don't know which FFT code suits my case.
Any help would be greatly appreciated
For an FFT to work you will need to reconstruct your signal on a regular interval. There are two ways you can do this:
Interpolate the data you already have to estimate what the signal would be at regular intervals (see the sketch after this list). However, the resulting FFT may contain significant inaccuracies.
OR
Adjust the device reading from the accelerometer to incorporate an accurate timer, such that results are always transmitted at regular intervals. This is what I would recommend.
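A sketch of the interpolation option, assuming column vectors t (seconds) and a (acceleration) from your log; the names and the nominal rate are placeholders:
fs = 100;                           % [Hz] nominal sample rate (assumption)
tu = (t(1):1/fs:t(end)).';          % uniform time grid
au = interp1(t, a, tu, 'pchip');    % resample onto the uniform grid
N  = numel(au);
A  = fft(au - mean(au));            % remove the DC offset so it does not dominate
f  = (0:N-1)*fs/N;                  % frequency axis
plot(f(1:floor(N/2)), abs(A(1:floor(N/2)))/N)
xlabel('Frequency [Hz]'), ylabel('Amplitude')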

fft artificial defects due to finite sampling frequency

I use Matlab to calculate the fft of time series data. The signal has an unknown fundamental frequency (~80 MHz in this case), together with several higher-order harmonics (1st-20th order). However, due to the finite sampling frequency (500 MHz in this case), I always get mixing frequencies from the higher-order harmonics (7th-20th), e.g. the 7th with a peak at abs(2*500-80*7)=440 MHz, the 8th at 360 MHz, and the 13th with a peak at abs(13*80-2*500)=40 MHz. Does anyone know how to get rid of these artificial mixing frequencies? One possible way is to increase the sampling frequency to a sufficiently large value. However, my data set has a fixed number of points and a fixed time range, so the sampling frequency is actually determined by the data set itself. Any solutions to this problem?
You are hitting on a fundamental property of sampling: when you sample data at a fixed frequency fs, you cannot tell the difference between two signals with the same amplitude but different frequencies, where one has f1 = fs/2 - d and the other has f2 = fs/2 + d. This effect is frequently used to advantage - for example in mixers - but at other times it's an inconvenience.
Unless you are looking for this mixing effect (done, for example, at the digital receiver in a modern MRI scanner), you need to apply a "brick wall filter" with a cutoff frequency of fs/2. It is not uncommon to have filters with a roll-off of 24 dB / octave or higher - in other words, they let "everything through" below the cutoff, and "stop everything" above it.
Data acquisition vendors will often supply filtering solutions with their ADC boards for exactly this reason.
It's a long way of saying "that's how digitization works" - but it's true, that is how digitization works.
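A toy demonstration of this folding, using the numbers from the question (80 MHz fundamental, 500 MHz sampling; the 1/k amplitudes and sample count are arbitrary choices):
fs = 500e6;  f0 = 80e6;
t  = (0:4095)/fs;                      % 4096 samples
x  = zeros(size(t));
for k = 1:20
    x = x + cos(2*pi*k*f0*t)/k;        % fundamental plus harmonics
end
f  = (0:numel(t)-1)*fs/numel(t);
plot(f/1e6, abs(fft(x)))
xlabel('Frequency [MHz]')              % the 7th harmonic (560 MHz) shows up at 60/440 MHz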
Typically, one low-pass filters the signal to below half the sample rate before sampling. Otherwise, after sampling, there is usually no way to separate aliased high-frequency noise (your high-order harmonics) from the more useful spectrum below half the sample rate (the Nyquist frequency).
If you don't filter the signal before sampling it, the defect is inherent in the sample vector, not in the FFT.

How to decide the cutoff frequencies of a filter when using an ADC (flow: analog signal to ADC to bits to FIR filter to filtered output)

An FIR filter has to be used to remove the noise.
I don't know the frequencies of the noise that might be added to the analog feedback signal I am measuring.
My apparatus produces an analog feedback signal; I use an ADC to digitize it, and now I have to apply an FIR filter to remove the noise. I am not sure which noise to target: the noise added to the analog signal from the environment, or noise introduced by the ADC itself?
I have to code this in VHDL (this part is easy, I can do that).
My main problem is deciding the frequencies.
Thanks in advance!
I am tagging vhdl as some people working in vhdl might know about the filter.
Let me start by stating the obvious: an ADC samples at a fixed rate and cannot represent any frequency higher than the Nyquist frequency.
Step one: understand aliasing, and that any frequency higher than the Nyquist frequency will alias into your signal as noise. Once you get this, you understand that you need an anti-aliasing filter in your hardware, in your analog signal path, before you digitize it. Depending on the noise requirements of the application you may implement a very complicated 4-pole filter using op-amps; the simplest option is an RC filter.
Step two: set the filter cutoff. Don't set the cutoff right at the Nyquist frequency; make sure the filter is cutting well before Nyquist (1/2x... 1/10x, depending on how clean the signal is and how much noise is present). A worked example follows.
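For instance, a first-order RC filter has the cutoff fc = 1/(2*pi*R*C); picking component values for an assumed sample rate (all numbers here are placeholders):
fs = 10e3;              % [Hz] ADC sample rate (assumption)
fc = fs/10;             % cut well below Nyquist (fs/2); here 1 kHz
R  = 10e3;              % [ohm] chosen resistor value
C  = 1/(2*pi*R*fc)      % required capacitance, ~16 nF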
So now you're actually somewhat oversampling your signal: the filter cuts above your signal band, and the sample rate is high enough that the Nyquist frequency is sufficiently higher. Oversampling gives you extra data, captured with the intent of filtering further, and possibly even decimating (keeping one in N samples and throwing the rest out).
Step three: use a digital filter to further remove the noise between the initial cutoff of the anti-aliasing filter and the Nyquist frequency. This is a science of its own, really, but let me start by suggesting a good decimation filter: averaging 2 values. It's a boxcar filter of length 2, also known as a sinc filter, and it can be reapplied N times. After N times it is equivalent to an FIR filter using the values of the Nth row of Pascal's triangle, divided by their sum, as the sketch below shows.
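A sketch of that construction (x is a placeholder for the oversampled input):
N = 4;
h = 1;
for k = 1:N
    h = conv(h, [1 1]/2);      % apply one more 2-point average
end
disp(h)                        % [1 4 6 4 1]/16: binomial (Pascal) taps
y = filter(h, 1, x);           % smooth the oversampled input
y = y(1:2:end);                % decimate by 2, keeping one in two samples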
Again, the filter choice is a science of its own. At the extreme are the decimation filters of a sigma-delta ADC. The CS5376A datasheet clearly explains what they're doing; I learned quite a bit just from reading that datasheet!

Trying to filter (tons of) noise from accelerometers and gyroscopes

My project:
I'm developing a slot car with a 3-axis accelerometer and gyroscope, trying to estimate the car's pose (x, y, z, yaw, pitch), but I have a big problem with vibration noise (while the car is running, the gears induce vibration and the track makes it worse), because the noise takes values between ±4 g (where g = 9.81 m/s^2) for the accelerometers, for example.
I know (because I observe it) that the noise is correlated across all of my sensors.
In my first attempt, I tried to work it out with a Kalman filter, but it didn't work because the values of my state vector had really big noise.
EDIT2: In my second attempt I tried a low-pass filter before the Kalman filter, but it only slowed down my system and didn't filter out the low-frequency components of the noise. At this point I realized this noise might be composed of both low- and high-frequency components.
I was learning about adaptive filters (LMS and RLS), but I realized I don't have a noise reference signal, and if I use one accelerometer axis to filter another, I don't get absolute values, so it doesn't work.
EDIT: I'm having problems trying to find some example code for adaptive filters. If anyone knows about something similar, I will be very thankful.
Here is my question:
Does anyone know about a filter or have any idea about how I could fix it and filter my signals correctly?
Thank you so much in advance,
XNor
PS: I apologize for any mistakes; English is not my mother tongue.
The first thing I would do is run a DFT on the sensor signals and see if there actually are high- and low-frequency components in your accelerometer signals.
With a DFT you should be able to determine an optimum cutoff frequency for your low-pass/band-pass filter, as sketched below.
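A sketch of that check, assuming the Signal Processing Toolbox; the sample rate and variable name are placeholders:
fs = 200;                                        % [Hz] IMU sample rate (assumption)
[pxx, f] = pwelch(acc_z - mean(acc_z), [], [], [], fs);
semilogy(f, pxx)                                 % vibration peaks stand out on a log scale
xlabel('Frequency [Hz]'), ylabel('PSD')
% put the low-pass cutoff above the car-dynamics band and below the vibration peaks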
If you have a constant component on the Z axis, there is a chance that you haven't filtered out gravity. Note that if there is significant pitch or roll, this constant can be seen on your X and Y axes as well.
Generally, pose estimation with an accelerometer alone is not a good idea, as you need to integrate the acceleration signal twice to get a position. If the signal is noisy, you will be in trouble after only a couple of seconds if the noise is not 100% evenly distributed between + and -. The demo below shows how quickly this blows up.
Even if we assume there is no noise coming from your gears, the conversion accuracy of the accelerometer alone might start to mess up your pose after a couple of minutes.
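A quick demonstration of that double-integration drift (all values are made up):
fs = 100;  t = (0:fs*60-1).'/fs;       % one minute of samples
a  = 0.01*randn(size(t)) + 1e-3;       % white noise plus a tiny constant bias
v  = cumtrapz(t, a);                   % integrate once: velocity
p  = cumtrapz(t, v);                   % integrate twice: position drifts ~ t^2
plot(t, p), xlabel('t [s]'), ylabel('position drift [m]')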
I would definitely use a second sensor, e.g. a compass or encoder, in combination with your mathematical model, and combine all your sensor data in a Kalman filter (sensor fusion).
You might also be able to derive a black-box model of your noise by assuming that it is correlated with your motor's RPM (Box-Jenkins/ARMA/ARIMA).
I had similar problems with noise at low and high frequencies, and I managed to remove it decently, without removing good signal too, by using a universal microphone shock mount. It does a good job with the gyroscope too, especially if you find one which fits it (or you can put the sensor in a small case and then mount that).
It basically uses elastic strings to absorb shocks and vibration.
Have you tried a simple low-pass filter on the data? I'd guess that the vibration frequency is much higher than the frequencies in normal car acceleration data, at least in normal driving. Crashes might be another story...
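In code, that suggestion is a one-liner, assuming the Signal Processing Toolbox (the 20 Hz cutoff, fs, and the variable names are guesses):
acc_smooth = lowpass(acc_raw, 20, fs);   % keep car dynamics, drop gear vibration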