How to find the similarity between two curves and the score of similarity? - MATLAB

I have two data sets (t,y1) and (t,y2). These data sets look visually the same, but there is some time delay or magnitude shift between them. I want to find the similarity between the two curves, giving a score of 1 for approximately similar curves and 0 for dissimilar ones. Some curves seem to differ only because of oscillation in the data, so I am searching for a method to find the similarity between the curves. I already tried the gradient command in MATLAB to find the slope of the curve at each time step and compared them, but it did not give me satisfactory results. Could anybody please suggest a method to find the similarity between the curves?
Thanks in advance.

This answer assumes your y1 and y2 are signals rather than curves. The latter I would try to parametrise with POLYFIT.
If they really look the same, but are shifted in time (and not wrapped around), then you can do the following:
y1n = y1 / norm(y1);               % normalise away the magnitude difference
y2n = y2 / norm(y2);
normratio = norm(y1) / norm(y2);   % captures the relative magnitude
c = conv2(y1n, flip(y2n), 'same'); % convolving with the flipped signal gives the cross-correlation
[val, ind] = max(c);               % the peak location indicates the best alignment
ind will indicate the time shift and normratio the difference in magnitude.
Both can be used as features for your similarity metric. I assume, however, that your signals actually vary by more than just a time shift or magnitude, in which case some sort of signal parametrisation may be a better choice, and then building a metric on those parameters.
Without knowing anything about your data I would first try with AR (assuming typical approaches such as FFT or PRINCOMP won't work).
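As a rough, hedged sketch of turning the two features above (the correlation peak location and normratio) into a single 0-1 score; the decay constant is an arbitrary assumption, so tune it to your data:
N = numel(y1);
shift = abs(ind - ceil(N/2));               % lag relative to zero shift; the
                                            % 'same' output is roughly centred
score_shift = exp(-shift / (0.1 * N));      % 1 at zero lag, decays with lag
score_mag   = min(normratio, 1/normratio);  % 1 when the magnitudes match
similarity  = score_shift * score_mag;      % combined score in [0, 1]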

For time series data similarity measurement, one traditional solution is DTW (Dynamic Time Warping).
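If the dtw function (in the Signal Processing Toolbox since R2016a) is unavailable, a minimal sketch of the classic dynamic-programming recurrence is easy to write; this version is unoptimised and has no warping-window constraint:
% Minimal DTW distance between two 1-D series x and y.
function d = dtw_dist(x, y)
    nx = numel(x);  ny = numel(y);
    D = inf(nx + 1, ny + 1);       % accumulated-cost matrix
    D(1, 1) = 0;
    for i = 1:nx
        for j = 1:ny
            cost = abs(x(i) - y(j));                 % local distance
            D(i+1, j+1) = cost + min([D(i, j+1), ... % insertion
                                      D(i+1, j), ... % deletion
                                      D(i, j)]);     % match
        end
    end
    d = D(end, end);   % cost of the optimal warping path
end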

Kolmogorov-Smirnov test (kstest2 function in MATLAB)
Chi-square test
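For the record, a minimal sketch of the two-sample Kolmogorov-Smirnov test, assuming the Statistics and Machine Learning Toolbox; note that it compares value distributions and ignores time ordering:
[h, p] = kstest2(y1, y2);   % h = 1 rejects "same distribution" at the 5% level
% The p-value can serve as a rough 0-1 similarity score for the value
% distributions, though it says nothing about alignment in time.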
To measure similarity there is also a measure called MIC (maximal information coefficient). It quantifies the information shared between two data sets or curves.

The dv and dc distances in the following paper may solve your problem.
http://bioinformatics.oxfordjournals.org/content/27/22/3135.full

Related

How to convert scales to frequencies in Wavelet Transform

I'm dealing with the CWT, and I have a big problem converting scales to frequencies. In the MATLAB wavelet tutorial they use the pseudo-frequency expression to convert scales to frequencies.
But if I use the built-in function scal2frq I obtain a different result.
I don't understand the role of the Morlet Fourier factor.
Thanks in advance.
It is a pretty complicated concept, which I somewhat understand. I'll write some points here so that you might figure it out yourself rather more easily.
A simple fact is that:
Scale is inversely proportional to frequency.
For example, imagine we have a 1-100 Hz range of frequencies in some time series data, such as stock market or earthquake data. Scale is "supposed to be" the inverse of that. For instance, if scale ran over the range 1 to 100 with the product of scale and frequency held at 100, we'd have:

Scale (1/Hz)    Frequency (Hz)
1               100
2               50
100             1
Therefore:
The frequency is not the real frequency of those time series data (e.g., stock market, earthquake) that we know of; the two are only related, inversely.
And we can safely say that we are calculating some "pseudo-frequencies" here, which is what MATLAB does (by approximation). You can read about the approximation process in the documentation, in the section on pseudo-frequencies.
MATLAB calculates those pseudo-frequencies based on the center frequency of the wavelet:
In wavelet analysis, the way to relate scales to frequencies is to determine the center frequency of the wavelet, Fc, and compute Fa = Fc / (a * dt), where a is the scale and dt the sampling period.
Of course the center frequency differs when we change the type of wavelet in the calculation; thus, it will change every time in our approximation process.
That "MorletFourierFactor" is a variable that approximates a constant so that taking 1/scale closely approximates those "pseudo-frequencies".
I thought an image showing shifting (along the time axis) and scaling (along the frequency axis) might be a little helpful to look at as well.
The bottom line is: don't worry about pseudo-frequencies; you probably won't need them. If you want a frequency spectrum, you can apply one of the standard frequency-domain methods (such as the Fast Fourier Transform) to whatever time series data you have.
If you really, really want that mapping, you can also try to design a method to approximate it yourself.
Source
Harvard Seismology

Resampling data with minimal loss of information in time-domain

I am trying to resample/recreate already recorded data for plotting purposes. I thought this is the best place to ask the question (besides dsp.se).
The data is sampled at a high frequency, contains too many data points, and is not suitable for plotting in the time domain (not enough memory). I want to resample it with minimal loss. The sampling interval of the resulting data doesn't need to be uniform (again, this is for plotting purposes, not analysis), although the input data is equally sampled.
When we use the regular resample command from MATLAB/Octave, it can distort stiff pieces of the curve.
What is the best approach here?
For reference I put two pictures found on tex.se.
The first image is a regular resample.
The second image is better resampled data that behaves well around the peaks.
You should try this set of files from the File Exchange. It computes an optimal lookup table based on either a maximum number of points or a given error. You can choose natural, linear, or spline interpolation. Spline will give the smallest table but is slower than linear. I don't use natural unless I have a really good reason.
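If you'd rather roll your own, here is a hedged sketch of one common alternative, min/max decimation per plot bucket, which keeps the extremes of each bucket so the peaks survive (t and y are assumed to be equally sampled column vectors, with more samples than buckets):
function [tr, yr] = minmax_decimate(t, y, nbuckets)
    % Split the samples into nbuckets contiguous buckets and keep the
    % minimum and maximum of each one, in time order.
    edges = round(linspace(1, numel(y) + 1, nbuckets + 1));
    tr = zeros(2*nbuckets, 1);  yr = zeros(2*nbuckets, 1);
    for b = 1:nbuckets
        idx = edges(b):edges(b+1) - 1;        % samples in this bucket
        [~, imin] = min(y(idx));
        [~, imax] = max(y(idx));
        pos = sort([imin, imax]);             % keep them in time order
        tr(2*b-1:2*b) = t(idx(pos));
        yr(2*b-1:2*b) = y(idx(pos));
    end
end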
Sincerely,
Jason

Measuring density for three dimensional data (in Matlab)

I have a dataset consisting of a large collection of points in three-dimensional Euclidean space. In this collection of points, I am trying to find the point that is nearest to the area with the highest density of points.
So my problem consists of two steps:
1: Determine where the density of the distribution of points is at its highest
2: Determine which point is nearest to the point found in 1
Point 2 I can manage, but I'm not sure how to solve point 1. I know there are a lot of functions for density estimation in MATLAB, but I'm not sure which one would be the most suitable, or the most straightforward to use.
Does anyone know?
My command of statistics is a little rusty, but as far as I can tell, this type of problem calls for multivariate analysis. Someone suggested I use multivariate kernel density estimation, but I'm not really sure whether that's the best solution.
Density is a measure of mass per unit volume. On the assumption that your points all have the same mass, you are, I suppose, trying to measure the number of points per unit volume. So one approach is to divide your subset of Euclidean space into lots of little unit volumes (let's call them voxels, like everyone does) and count how many points fall in each one. The voxel with the most points is where the density of points is at its highest. This is, of course, numerical integration of a sort. If your points were distributed according to some analytic function (and I guess they are not) you could solve the problem with pencil and paper.
You might make this approach as sophisticated as you like, perhaps initially dividing your space into 2 x 2 x 2 voxels, then choosing the voxel with the most points and subdividing it in turn until your criteria are satisfied.
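A hedged sketch of the basic voxel-counting version, assuming the points sit in an N-by-3 matrix P and that implicit expansion (MATLAB R2016b+) is available:
nbins = 20;                                    % voxels per axis (an assumption)
lo = min(P, [], 1);  hi = max(P, [], 1);       % bounding box of the cloud
sub = 1 + floor(nbins * (P - lo) ./ (hi - lo + eps));  % voxel subscripts
sub = min(sub, nbins);                         % clamp points on the top faces
counts = accumarray(sub, 1, [nbins nbins nbins]);      % points per voxel
[~, lin] = max(counts(:));                     % densest voxel (point 1)
[bx, by, bz] = ind2sub(size(counts), lin);
centre = lo + ([bx by bz] - 0.5) .* (hi - lo) / nbins; % its centre
[~, nearest] = min(sum((P - centre).^2, 2));   % closest point to it (point 2)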
I hope this will get you started on your point 1; you seem to be OK with point 2 so I'll stop now.
EDIT
It looks as if triplequad might be what you are looking for.

Designing a simple bandpass/bandstop filter in Matlab

For a homework assignment I have to design a simple bandpass filter in MATLAB that filters out everything between 250 Hz and 1000 Hz. What I have done so far:
- using the 'enframe' function to create half-overlapping windows with 512 samples each. To the windows I apply the Hann window function.
- on each window I apply an FFT. After this I reconstruct the original signal with the function ifft; that all goes well.
But the problem is how to interpret the result of the fft function and how to filter out a frequency band.
Unless I'm mistaken, it sounds like you're taking the wrong approach to this.
If your assignment is to manipulate a signal specifically by manipulating its FFT, then ignore me. Otherwise, read on.
The FFT is normally used to analyse a signal in the frequency domain. If you start fiddling with the complex coefficients that an FFT returns, then you're getting into a complicated mathematical situation. This is particularly the case since your cut-off frequencies aren't going to lie nicely on FFT bin frequencies. Also, remember that the FFT is not a perfect transform of the signal you're analysing: it will always introduce artefacts of its own, due to scalloping error and convolution with your Hann window.
So.. let's leave the FFT for analysis, and build a filter.
If you're doing band-pass design in your class, I'm going to assume you understand what such filters do. There are a number of functions in MATLAB that generate the coefficients for different types of filter, e.g. butter, kaiser, cheby1. Look up their help pages in MATLAB for loads more info. The values you plug into these functions depend on your filter specification, i.e. you want "X" dB roll-off and "Y" dB passband ripple. You'll need some idea of how these filters work, and knowledge of their transfer functions, to understand how the filter order relates to your specification.
Once you have your coefficients, it's just a case of running them through the filter function (again, check the help page if you're not sure how this works).
The mighty JOS has a great walkthrough of bandpass filter design here.
One other little niggle: in your question you mentioned that you want your filter to "filter out" everything between 250 Hz and 1000 Hz. This is a bit ambiguous. If you're designing a bandpass filter, you would want to "pass" everything between 250 Hz and 1000 Hz. If you do in fact want to "filter out" everything in this range, you want a band-stop filter instead.
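As a minimal sketch of that coefficient route, assuming the Signal Processing Toolbox and an 8 kHz sampling rate (both assumptions; adjust to your signal):
Fs = 8000;                            % sampling rate (an assumption)
Wn = [250 1000] / (Fs/2);             % band edges normalised to Nyquist
[b, a] = butter(4, Wn, 'bandpass');   % Butterworth band-pass coefficients
y = filter(b, a, x);                  % x is the input signal (assumed)
% If you really do want to remove the band, design a band-stop instead:
% [b, a] = butter(4, Wn, 'stop');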
It all depends on the sampling rate you use.
If you sample correctly according to the Nyquist-Shannon sampling theorem, then you can try to interpret the samples of your fft using the definition of the DFT.
To understand which frequencies correspond to which samples in the DFT result, I think it's best to look at the inverse transform. You multiply coefficient k by
exp(i*2*pi*k/N*n)
which, by Euler's formula, can be interpreted as a complex sinusoid. So each coefficient multiplies a sinusoid of a certain frequency.
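As a hedged sketch of that bin-to-frequency mapping, assuming N = 512 samples per window and an example sampling rate Fs:
Fs = 8000;  N = 512;               % both values are assumptions
f = (0:N-1) * Fs / N;              % bin k sits at k*Fs/N Hz; bins above N/2
                                   % mirror the negative frequencies
band = (f >= 250) & (f <= 1000);   % logical mask for the 250-1000 Hz bins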
Good luck ;)

How can I perform K-means clustering on time series data?

How can I do K-means clustering of time series data?
I understand how this works when the input data is a set of points, but I don't know how to cluster time series of size 1-by-M, where M is the data length. In particular, I'm not sure how to update the mean of a cluster for time series data.
I have a set of labelled time series, and I want to use the K-means algorithm to check whether I will get back a similar label or not. My X matrix will be N-by-M, where N is the number of time series and M is the data length as mentioned above.
Does anyone know how to do this? For example, how could I modify this k-means MATLAB code so that it would work for time series data? Also, I would like to be able to use different distance metrics besides Euclidean distance.
To better illustrate my doubts, here is the code I modified for time series data:
n = size(X,1);   % number of time series (X is assumed to be n-by-M)
% Check if the second input is a set of centroids
if ~isscalar(k)
    c = k;
    k = size(c,1);
else
    c = X(ceil(rand(k,1)*n),:);   % assign centroids randomly at start
end
% Allocate variables
g0 = ones(n,1);
gIdx = zeros(n,1);
D = zeros(n,k);
% Main loop: converge when the previous partition equals the current one
while any(g0 ~= gIdx)
    g0 = gIdx;
    % Compute the distance from every series to every centroid
    for t = 1:k
        for s = 1:n
            D(s,t) = sqrt(sum((X(s,:) - c(t,:)).^2));   % Euclidean distance
        end
    end
    % Partition the data to the closest centroids
    [~, gIdx] = min(D, [], 2);
    % Update centroids using the means of the partitions
    for t = 1:k
        % Is this how we calculate the new mean of the time series?
        c(t,:) = mean(X(gIdx==t,:), 1);
    end
end
Time series are usually high-dimensional, and you need a specialized distance function to compare them for similarity. Plus, there might be outliers.
k-means is designed for low-dimensional spaces with a (meaningful) Euclidean distance. It is not very robust to outliers, as it puts squared weight on them.
It doesn't sound like a good idea to me to use k-means on time series data. Try looking into more modern, robust clustering algorithms. Many will allow you to use arbitrary distance functions, including time series distances such as DTW.
It's probably too late for an answer, but:
k-means can be used to cluster longitudinal data
Anony-Mousse is right, DTW distance is the way to go for time series
The methods above use R. You'll find more methods by searching, e.g., for "Iterative Incremental Clustering of Time Series".
I have recently come across the kml R package which claims to implement k-means clustering for longitudinal data. I have not tried it out myself.
Also, the paper "Time-series clustering - A decade review" by S. Aghabozorgi, A. S. Shirkhorshidi and T. Ying Wah might be useful for seeking out alternatives. Another nice paper, although somewhat dated, is "Clustering of time series data - a survey" by T. Warren Liao.
If you do really want to use clustering, then depending on your application you could generate a low-dimensional feature vector for each time series: for example, the mean, the standard deviation, the dominant frequency from a Fourier transform, and so on. This would be suitable for use with k-means, but whether it gives you useful results depends on your specific application and the content of your time series.
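A hedged sketch of that idea in MATLAB, assuming X is an n-by-M matrix of series, k is the cluster count, and the Statistics and Machine Learning Toolbox is available for kmeans and zscore:
Fs = 1;                                      % sampling rate (an assumption)
M  = size(X, 2);
mu = mean(X, 2);                             % mean of each series
sd = std(X, 0, 2);                           % standard deviation of each series
P  = abs(fft(X, [], 2)).^2;                  % row-wise power spectrum
[~, kdom] = max(P(:, 2:floor(M/2)), [], 2);  % strongest bin, skipping DC
fdom = kdom * Fs / M;                        % dominant frequency of each series
F = [mu sd fdom];                            % n-by-3 feature matrix
idx = kmeans(zscore(F), k);                  % cluster the standardised features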
I don't think k-means is the right way to do it either. As Anony-Mousse suggested, you can utilize DTW. In fact, I had the same problem in one of my projects, and I wrote my own class for it in Python. The logic is:
Create all your cluster combinations, where k is the cluster count and n is the number of series. The number of combinations returned should be n! / (k! * (n-k)!). These are the potential sets of centres.
For each series, calculate the distance to each centre in each candidate set and assign the series to the nearest one.
For each candidate set, calculate the total distance within the individual clusters.
Choose the minimum.
And the Python implementation is here, if you're interested.
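For MATLAB users, a hedged sketch of the same exhaustive idea, assuming the dtw function from the Signal Processing Toolbox, an n-by-M matrix X, a cluster count k, and a small n (the number of candidate centre sets grows combinatorially):
n = size(X,1);
combos = nchoosek(1:n, k);              % every possible set of k centres
best = inf;  bestIdx = [];
for ci = 1:size(combos,1)
    centres = combos(ci,:);
    total = 0;
    for s = 1:n
        d = inf;
        for t = centres
            d = min(d, dtw(X(s,:), X(t,:)));   % distance to nearest centre
        end
        total = total + d;              % within-cluster distance for this set
    end
    if total < best
        best = total;  bestIdx = centres;      % keep the best set so far
    end
end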