I have a 1D accelerometer signal (one axis only). I would like to create a robust algorithm that can recognize certain shapes in the signal.
At first I apply a moving average filter to the raw signal. On the attached picture the raw signal is coloured red and the averaged signal is black. As seen from the picture, some trends are visible from the averaged (black) signal - the signal contains 10 repetitions of a peak like pattern, where acceleration climbs to a maximum and then drops back down. I have marked the beginnings and endings of those patterns with a cross.
So my goal is to find the marked positions automatically. The problems making the pattern extraction difficult are:
the start of the pattern could have a different y value than the end of the pattern
the pattern could have more than one peak
I do not have any concrete time information (e.g. that a pattern takes A time units from start to end)
I have tried different approaches, which are pretty much home-brew, so I won't mention them - I don't want you to be biased by my way of thinking. Are there some standard or by-the-book approaches for doing this kind of pattern extraction? Or maybe does anyone know how to tackle the problem in a robust way?
Any idea will be appreciated.
Keep it simple!
It appears the moving average is a good enough damper device; keep it as-is, maybe only increasing or decreasing its sample count if you notice that it either leaves too much noise or removes too much signal, respectively. You then work off this averaged signal exclusively.
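For instance, a minimal MATLAB sketch of such a smoother (the variable names and the window length are assumptions to be tuned, not values from the question):
% Hedged sketch: smooth the raw accelerometer vector with a moving average
win = 15;                      % assumed window length; tune against your sample rate
xs  = movmean(x_raw, win);     % x_raw is the raw 1D signal, xs the averaged one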
The pattern markers you seek seem relatively easy to detect. Expressed in English, these markers are:
Targets = the points of inflection in the averaged readings curve, where the slope goes markedly from negative to positive.
You should therefore be able to detect this situation by comparing the slope values, calculated along with the moving average, as each new reading value becomes available (with a short delay, of course, since the slope at a given point can only be calculated once the averaged reading for the next [few] point[s] is available).
To avoid false detections, however, you'd need to define a few parameters aimed at filtering out the undesirable patterns. These parameters will define more precisely the meaning of "markedly" in the above target definition.
Tentatively the formula for detecting a point of interest could be as simple as this
S(t) - S(t-1) > Min_delta_Slope
where
S(t-1) and S(t) are the slopes (more on this below) at times t-1 and t, respectively
Min_delta_Slope is a parameter defining how "sharp" a change in slope we want at a minimum.
Assuming normalized t and Y units, we can set the Min_delta_Slope parameter close to or even past 1. Intuitively, a value of 1 (again in normalized units) would indicate that we target points where the curve "arrived" with a downward slope of, say, 50% and left the point with an upward slope of 50% (or 40% + 60%, or ... 10%, i.e. almost flat, and 90%, i.e. almost vertical).
To avoid detecting points where this is merely a small dip in the curve, we can take more points into consideration, with a fancier formula such as
Pm2 * S(t-2) + Pm1 * S(t-1) + P0 * S(t) + Pp1 * S(t+1) > Min_delta_Slope
where
Pm2, Pm1, P0 and Pp1 are coefficients giving relative importance to the slope at various points before and after the point of interest (Pm2 and Pm1 are typically negative, unless we use only positive parameters and put the negative signs in the formula)
S(t±n) is the slope at the various times
and Min_delta_Slope is a parameter defining how "sharp" a change in slope we want at a minimum.
Intuitively, this four-point formula takes into account the shape of the curve two readings before and two readings after the point of interest (in addition to the points right before and after it). Given the proper values for the parameters, the formula would require that the curve be steadily coming "down" for two time slices, then steadily going up for the next two time slices, hence avoiding marking smaller dips in the curve.
An alternative way to achieve this may be to compute the slope using the difference in Y value between the [averaged] reading from two (or more) time slices ago and the current [averaged] reading. The two approaches are similar but would produce slightly different results; generally we'd have more say over the desired shape of the curve with the Pm2, Pm1, P0 and Pp1 parameters.
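A minimal MATLAB sketch of the idea, assuming xs is the averaged signal (the coefficients and threshold shown are made-up starting values, to be tuned on your data):
% Hedged sketch: score each sample by the weighted change of slope around it
ds = diff(xs);                              % slope between consecutive averaged samples
P  = [-0.5, -1, 1, 0.5];                    % Pm2, Pm1, P0, Pp1 (assumed weights)
Min_delta_Slope = 0.8;                      % assumed threshold, in normalized units
score = zeros(size(ds));
for t = 3:numel(ds)-1
    score(t) = P(1)*ds(t-2) + P(2)*ds(t-1) + P(3)*ds(t) + P(4)*ds(t+1);
end
markers = find(score > Min_delta_Slope);    % candidate start/end points of the patterns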
You might want to look at watershed segmentation, which does a related kind of thing (dividing landscapes into their separate catchment basins). Oddly enough, I'm actually writing a PhD thesis which uses watershed a lot at the moment (seriously :))
I am trying to build a receiver operating characteristic (ROC) curves to evaluate the discriminating ability of my classifier to correctly classify diseased and non-diseased subjects.
I understand that the closer the curve follows the left-hand border and then the top border of the ROC space, the more accurate the test. My experiments gave me a quite desirable value of area under the curve (AUC), i.e. 0.86458. However, the ROC curve (in which I included the cut-off points for tracing purposes) seems quite strange, as it gave me straight lines as below:
... and not the curve I expected, like the ones I normally see in references such as this:
Does it have something to do with the number of observations used? (In this case I only have 50 samples.) Or is this just fine as long as the AUC value is high and the 'curve' comes above the 45-degree diagonal of the ROC space? I would be glad if someone could share their thoughts about it. Thank you!
By the way, I used the perfcurve() function in matlab:
% ROC comparison between the proposed approach and the baseline
[X1,Y1,T1,auc1,OPTROCPT1,SUBY5,SUBYNAMES1] = perfcurve(testLabel,predlabel_prop,1);
[X2,Y2,T2,auc2,OPTROCPT2,SUBY2,SUBYNAMES2] = perfcurve(testLabel,predLabel_base,1);
figure;
plot(X1,Y1,'-r*',X2,Y2,'--ko');
legend('proposed approach','baseline','Location','east');
xlabel('False positive rate'); ylabel('True positive rate')
title('ROC comparison of the proposed approach and the baseline')
text(0.6,0.3,{'* - proposed method',strcat('Area Under Curve = ',...
num2str(auc1))},'EdgeColor','r');
text(0.6,0.15,{'o - baseline',strcat('Area Under Curve = ',num2str(auc2))},'EdgeColor','k');
You probably have too little data.
Your curve indicates your data set has 13 negative and 5 positive examples (in your test set?).
Furthermore, all but 4 have exactly the same score (maybe 0)? Or is that your cutoff?
Given this small sample size, I would not accept the hypothesis that your proposed method is better than the baseline, but rather the alternative: the methods perform as well as each other. The difference of 0.04 is much too small for this tiny sample size; the results are virtually identical. Any variation within the cut-off area (the diagonal part) can be much larger than this 0.04. On a different run, with a different test set, the results may well be the other way around.
The shape of your curve is just a result of the high explanatory power of your model and the limited number of observations (e.g. take a look at the example here: http://nl.mathworks.com/help/stats/perfcurve.html).
I am trying to measure the PSD of a stochastic process in matlab, but I am not sure how to do it. I have posted the exact same question here, but I thought I might have more luck here.
The stochastic process describes wind speed, and is represented by a vector of real numbers. Each entry corresponds to the wind speed in a point in space, measured in m/s. The points are 0.0005 m apart. How do I measure and plot the PSD? Let's call the vector V. My first idea was to use
[p, w] = pwelch(V);
loglog(w,p);
But is this correct? The thing is, that I'm given an analytical expression, which the PSD should (in theory) match. By plotting it with these two lines of code, it looks all wrong. Specifically it looks as though it could need a translation and a scaling. Other than that, the shapes of the two are similar.
UPDATE:
The image above actually doesn't depict the PSD obtained by using pwelch on a single vector, but rather the mean of the PSDs of 200 vectors, since these vectors stem from numerical simulations. As suggested, I have tried scaling by 2*pi/0.0005. I saw that you can actually give this information to pwelch. So I tried using the code
[p, w] = pwelch(V,[],[],[],2*pi/0.0005);
loglog(w,p);
instead. As seen below, it looks much nicer. It is, however, still not perfect. Is that just something I should expect? Taking the square root is not the answer, by the way, but thanks for the suggestion. For one thing, it should follow Kolmogorov's -5/3 law, which it does now (it follows the green line, which has slope -5/3). The function I'm trying to match it with is the Shkarofsky spectral density function, which is the one-dimensional Fourier transform of the Shkarofsky correlation function. Is it not possible to mark up math here on the site?
UPDATE 2:
I have tried using [p, w] = pwelch(V,[],[],[],1/0.0005); as was suggested. But as you can see, it still doesn't quite match up. It's hard for me to explain exactly what I'm looking for, but what I would like (or, what I expected) is that the dip of the computed and the analytical PSD happens at the same place and falls off at the same rate. The data comes from simulations of turbulence. The analytical expression has been fitted to actual measurements of turbulence, wherein this dip is present as well. I'm no expert at all, but as far as I know the dip happens at the small length scales, since the energy is dissipated there due to the viscosity of the air.
What about using the standard equation for obtaining a PSD? I would do it this way:
Sxx = (fft(x) .* conj(fft(x))) * dt^2;   % |FFT|^2 scaled by the sample spacing squared
or
Sxx = fftshift(abs(fft(x)).^2) * dt^2;   % same quantity, with DC shifted to the centre
Then, if you really need to, you may think of applying a windowing criterion, such as
Hanning
Hamming
Welch
which will essentially just smooth/taper your PSD estimate.
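For example, a minimal windowed-periodogram sketch along these lines (x is the signal vector and dt the sample spacing; the normalization shown is one common convention and may differ from the one your analytical formula uses):
% Hedged sketch: direct PSD estimate from the FFT, with a Hann window applied
N   = numel(x);
w   = hann(N);                               % Hann (Hanning) taper
X   = fft(x(:) .* w);
Sxx = (abs(X).^2) * dt / sum(w.^2);          % window-compensated periodogram
f   = (0:N-1) / (N*dt);                      % frequency (wavenumber) axis in 1/m
loglog(f(2:floor(N/2)), Sxx(2:floor(N/2)))   % plot the positive-frequency half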
Presumably you need to rescale the frequency (wavenumber) to units of 1/m.
The frequency units from pwelch should be rescaled, since as the documentation explains:
W is the vector of normalized frequencies at which the PSD is
estimated. W has units of rad/sample.
Off the cuff my guess is that the scaling factor is
scale = 1/0.0005/(2*pi);
or 318.3 (m^-1).
As for the intensity, it looks like taking a square root might help. Perhaps your equation reports an intensity, not PSD?
Edit
As you point out, since the analytical and computed PSD have nearly identical slopes they appear to obey similar power laws up to 800 m^-1. I am not sure to what degree you require exponents or offsets to match to be satisfied with a specific model, and I am not familiar with this particular theory.
As for the apparent inconsistency at high wavenumbers, I would point out that you are entering the domain of very small numbers and therefore (1) floating point issues and (2) noise are probably lurking. The very nice looking dip in the computed PSD on the other hand appears very real but I have no explanation for it (maybe your noise is not white?).
You may want to look at this submission at matlab central as it may be useful.
Edit #2
After inspecting documentation of pwelch, it appears that you should pass 1/0.0005 (the sampling rate) and not 2*pi/0.0005. This should not affect the slope but will affect the intercept.
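In other words, something along these lines (dx = 0.0005 m is the sample spacing from the question):
% Hedged sketch: pass the sampling rate in samples per metre, not rad per metre
dx = 0.0005;
[p, k] = pwelch(V, [], [], [], 1/dx);   % k is then returned in cycles per metre
loglog(k, p)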
The dip in PSD in your simulation results looks similar to aliasing artifacts that I have seen in my data when the original data were interpolated with a low-order method. To make this clearer - say my original data was spaced at 0.002 m, but in the course of cleaning up missing data, trying to save space, whatever, I linearly interpolated those data onto a 0.005 m spacing. The frequency response of linear interpolation is not well-behaved, and will introduce peaks and valleys at the high wavenumber end of your spectrum.
There are different conventions for spectral estimates - whether the wavenumber units are 1/m or radians/m, and whether the spectrum is single-sided or double-sided.
help pwelch
shows that the default settings return a one-sided spectrum, i.e. the bin for some frequency ω will include the power density for both +ω and -ω.
You should double check that the idealized spectrum to which you are comparing is also a one-sided spectrum. Otherwise, you'll need to halve the values of your one-sided spectrum to get values representative of the +ω side of a two-sided spectrum.
I agree with Try Hard that it is the cyclic frequency (generally Hz, or in this case 1/m) which should be specified to pwelch. That said, the returned frequency vector from pwelch is also in those units. Analytical spectral formulae are usually written in terms of angular frequency, so you'll want to be sure that you evaluate it in terms of radians/m, but scale back to 1/m for plotting.
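A sketch of that bookkeeping might look like this (shkarofskyPSD is a hypothetical placeholder for your analytical one-sided spectrum written in terms of angular wavenumber):
% Hedged sketch: compare pwelch output (per cycles/m) with a formula defined per rad/m
dx = 0.0005;
[p, k] = pwelch(V, [], [], [], 1/dx);     % k in cycles per metre
k_rad  = 2*pi*k;                          % evaluate the analytical formula in rad/m
p_th   = 2*pi * shkarofskyPSD(k_rad);     % rescale the density from per-(rad/m) to per-(1/m)
loglog(k, p, k, p_th)                     % both plotted against cycles per metre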
In my project I have huge surfaces of 20,000 points computed by an algorithm. This algorithm sometimes makes an error, computing 1 or more points in a small area incorrectly.
This error cannot be fixed in the algorithm itself, but needs to be detected afterwards.
The error can be seen in the next figure:
As you can see, there is a wrongly computed point that not only breaks the fully homogeneous surface, but also destroys the aesthetics of the plot (which is also important in the project).
Sometimes it can be more than one point, in general no more than 5 or 6. The error is always in the Z axis, so there is no need to check X and Y.
I have been squeezing my mind to find a somewhat "generic" algorithm to detect these points.
I thought that maybe I could take patches of the surface, average the Z values, and then detect the points that fall outside the variance... but I don't think it will always work.
Any ideas?
NOTE: I don't want someone to write code for me, just an idea.
P.S.: relevant code for the above image:
[x,y] = meshgrid([-2:.07:2]);
Z = x.*exp(-x.^2-y.^2);
subplot(1,2,1)
surf(x,y,Z,gradient(Z))
subplot(1,2,2)
Z(35,35)=Z(35,35)+0.3;
surf(x,y,Z,gradient(Z))
The standard trick is to use a Laplacian, looking for the largest outliers. (This is not unlike what Mohsen posted as an answer, but is actually a bit easier.) You could probably even do it with conv2, so it would be pretty efficient.
I could offer a few ways to implement the idea. A simple one is to use my gridfit tool, found on the File Exchange. (Gridfit essentially uses a Laplacian for its smoothing operation.) Fit the surface with all points included, then look for the single point that was perturbed the most by the fit. Exclude it, then rerun the fit, again looking for the largest outlier. (With gridfit, you can use weights to give points a zero weight, a simple way to exclude a point or list of points.) When the largest perturbation that was needed is small enough, you can decide to stop the process. A nice thing is gridfit will also impute new values for the outliers, filling in all of the holes.
A second approach is to use the Laplacian directly, in more of a filtering approach. Here, you simply compute a value at each point that is the average of each neighbor to the left, right, above, and below. The single value that is most largely in disagreement with its computed average is replaced with a new value. Or, you can use a weighted average of the new value with the old one there. Again, iterate until the process does not generate anything larger than some tolerance. (This is the basis of an old outlier detection and correction scheme that I recall from the Fortran IMSL libraries, but probably dates back to roughly 30 years ago.)
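A minimal sketch of that second (filtering) approach, assuming Z is the surface matrix (the stopping rule and any weighting are left up to you):
% Hedged sketch: flag the point most in disagreement with its 4-neighbour average
K = [0 1 0; 1 0 1; 0 1 0] / 4;          % 4-neighbour averaging kernel
A = conv2(Z, K, 'same');                % local average at every grid point
R = Z - A;                              % residual against the neighbour average
R([1 end], :) = 0;  R(:, [1 end]) = 0;  % ignore the border, where the average is biased
[worst, idx] = max(abs(R(:)));          % largest disagreement and its location
[i, j] = ind2sub(size(Z), idx);         % candidate outlier; replace Z(i,j) with A(i,j) and iterate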
Since your function seems to vary smoothly, these abrupt changes can be detected by looking at the derivatives. You can
Take the derivative in one direction
Calculate mean and standard deviation of derivative
Find the points that are further from the mean than a certain multiple of the standard deviation.
Here is the code
U = diff(Z);                          % derivative of Z along the first dimension
V = (U - mean(U(:))) / std(U(:));     % standardize the derivative (z-scores)
surf(x(2:end,:), y(2:end,:), V)       % visualize; diff drops one row
V = [zeros(1, size(V,2)); V];         % pad back to the original number of rows
V(abs(V) < 10) = 0;                   % keep only derivatives beyond the threshold
V = sign(V);                          % +1 where Z jumps up, -1 where it jumps back down
W = cumsum(V);                        % nonzero on the rows spanned by each up/down jump pair
[I, J] = find(W);
outliers = [I, J];
For your example you get this plot for V, with a peak at around 21.7 while the second-highest peak is at around 1.9528, so a threshold of 10 seems OK.
and running the code returns
outliers =
35 35
The cumsum is needed for the cases where you have a patch of adjacent points that are incorrect.
Is there some process that can determine / remove an unknown DC offset from a non-periodic discrete time signal?
The signal in question has a sample rate of 25 Hz and harmonics of interest between 0.25 and 3 Hz.
I have tried using high-pass filters with mixed results. First I used a 10th-order Gaussian with Fc = 0 Hz; this did a good job of removing the offset, but it severely attenuated the AC as well, although it did leave the signal shape intact. Next I used a 168th-order equiripple filter with a stopband at 0 Hz and a passband at 0.25 Hz; the phase shift was too severe and the signal shape too distorted. The distortion could probably be reduced if the passband were brought down to 0.1 Hz, but this would just further increase the phase shift, which I need to keep to the very minimum.
Before and after applying x - LPF(x), as suggested by Paul R
I recommend using a notch filter at DC and using filtfilt to make it zero phase.
a = [1, -0.98]; b = [1, -1];   % H(z) = (1 - z^-1)/(1 - 0.98 z^-1): zero at DC, pole just inside the unit circle
y = filtfilt(b, a, x);         % forward-backward filtering gives zero phase
The closer the second value of a gets to -1 the narrower your notch will be.
A DC offset means that some constant value was added to the signal (the name originates from adding a DC voltage to an analog AC signal). If the DC component is really constant (and not changing really slowly), then you don't have to design some high-order (and potentially unstable) high-pass filter - you can just subtract the average of the signal from the signal. That is, of course, a high-pass filter as well (averaging is a type of low-pass, and 'one minus the average' is a high-pass), but a very simple one.
If, on the other hand, you have a reason to believe that the DC component is not really constant but rather an AC with a very low frequency, then you'd better average segments of your signal rather than the signal as a whole, which is the same as using a low-pass filter with an impulse response shorter than the length of the signal. In this case you have to make some assumptions about the "DC" component.
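For instance (a hedged sketch; the 4-second window is an assumed choice, not something dictated by the question):
% Hedged sketch: remove a constant offset, or a slowly varying one
fs = 25;                          % sample rate from the question, Hz
y_const = x - mean(x);            % constant DC: subtract the global average
win = round(4*fs);                % ~4 s averaging window (assumed)
dc  = movmean(x, win);            % slowly varying "DC" estimate
y_slow = x - dc;                  % subtract the running average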
Rather than implementing a high pass filter directly (which can be rather tricky for very low frequencies - you end up with a large number of coefficients and various issues with stability and passband ripple etc), you might instead want to consider implementing a low pass filter which will give you an estimate of the DC offset value, and then subtract this filtered offset from your signal, i.e. rather than:
y = HPF(x)
do this:
y = x - LPF(x)
The low pass filter can probably just be quite a simple filter with a relatively small number of terms. The big advantage of this implementation is that your higher frequency components should not have any unwanted artefacts due to phase, ripple, etc, since all you are doing is subtracting an almost stationary DC value from the samples.
The only potential downside is that if the DC offset is large you may have quite a long initial settling time before the estimate of the DC offset is accurate (although this is also true of any other implementation such as a direct high pass filter of course). If you have any a priori knowledge of what the offset value is likely to be (e.g. if it doesn't change very much from run to run, and you know the value from the previous run) then you can use this to optimise the settling time, by initialising the LPF state variables to a suitable value rather than 0.
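A minimal sketch of that idea with a single-pole low-pass as the DC tracker (the pole value 0.995 is an assumed choice that sets how slowly the estimate follows the signal):
% Hedged sketch: y = x - LPF(x), with the LPF state initialised near the first sample
alpha  = 0.995;                                       % closer to 1 => slower, narrower DC estimate
dc_est = filter(1-alpha, [1 -alpha], x, alpha*x(1));  % one-pole low-pass, initialised at x(1)
y      = x - dc_est;                                  % offset-free signal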
As others have said, to remove a DC offset, you can simply subtract the mean. Your signal does not need to be periodic, but it does need to be long enough to get a good estimate of the DC component.
If you still wish to go with a filtering approach, you can eliminate the severe distortion due to phase lag by using filtfilt. This function filters your timeseries once in the forwards direction and then once in the reverse direction, so that phase distortions cancel out.
You can design a symmetric FIR filter as the low-pass filter that estimates the DC and then subtract the output from your input signal. This filter has constant group-delay.
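For example (a hedged sketch; the order and cutoff are assumed values, chosen so the passband sits well below the 0.25 Hz band of interest):
% Hedged sketch: symmetric (linear-phase) FIR low-pass as the DC estimator
fs = 25;                               % sample rate, Hz
N  = 500;                              % filter order (assumed); group delay = N/2 samples
h  = fir1(N, 0.05/(fs/2));             % 0.05 Hz cutoff, normalized to Nyquist
dc = filter(h, 1, x);                  % DC estimate, delayed by N/2 samples
y  = x(1:end-N/2) - dc(N/2+1:end);     % align by the constant delay, then subtract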
I am trying to compare two data sets in MATLAB. To do this I need to filter the data sets by Fourier transforming the data, filtering it and then inverse Fourier transforming it.
When I inverse Fourier transform the data, however, I get a spike at either end of the red data set (the picture shows the first spike); it should be close to zero at the start, like the blue line. I am comparing many data sets and this only happens occasionally.
I have three questions about this phenomenon: first, what may be causing it; second, how can I remedy it; and third, will it affect the data further along the time series, or just at the beginning and end of the time series as it appears to from the picture?
Any help would be great thanks.
When using the DFT you must remember that the DFT assumes a periodic signal (a superposition of harmonic functions).
As you can see, the start point is treated as an exact continuation of the last point, in the manner of a harmonic function.
Did you perform any zero padding in the spectral domain?
Anyhow, windowing might reduce the overshoot.
Knowing more about the filter and the original data might be helpful.
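If you do try the windowing suggested above, a minimal sketch might be (the Hann window is just one common choice):
% Hedged sketch: taper the time series before the FFT to soften edge discontinuities
N  = numel(x);
w  = hann(N);                 % Hann window; Hamming, Tukey, etc. would also work
Xw = fft(x(:) .* w);          % spectrum of the windowed signal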
If you mean a spike near zero frequency, my answer is: check the DC component.
You seem interested in the shape, so doing
x = x - mean(x)
or
x -= mean(x)
or
x -= x.mean()
(I love numpy!)
will just constrain the dataset to have zero amplitude at zero frequency, so you can go ahead and compare the spectra's amplitudes.
(As a side note: did you check that you use fftshift and ifftshift appropriately? This has always been a source of trouble for me.)
It could be the numerical equivalent of the Gibbs phenomenon. If that's what it is, there's no way to remedy it except by filtering.