EEG in MATLAB - Graph theory segmentation

I have an EEG dataset and I want to examine it further with Laplacian Eigenmaps. At the moment, however, I want to find the local maxima and save into a new matrix all the different vectors that lie in between two local maxima (see picture; I am looking for the black lines). I use the findpeaks function in MATLAB and get a matrix with the peaks, but from there I do not know how to move on. Thanks in advance!

I am guessing a lot, but are you looking for something like:
%% some data
N = 4;                       % number of peaks
peakPositions = rand(N,2);   % peak positions
%% difference vector matrix
diffMat = zeros(N*(N-1)/2,2);
actPos = 1;
for n = 1:N
    diffMat(actPos:actPos+N-n-1,:) = ...
        bsxfun(@minus, peakPositions(n+1:end,:), peakPositions(n,:));
    actPos = actPos+N-n;
end
Example:
peakPositions =
0.2630 0.4505
0.6541 0.0838
0.6892 0.2290
0.7482 0.9133
diffMat =
0.3911 -0.3667
0.4262 -0.2215
0.4852 0.4628
0.0351 0.1452
0.0941 0.8295
0.0589 0.6843
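If the goal is instead literally to cut the signal into the pieces that lie between successive local maxima (the black lines mentioned in the question), a minimal sketch built on findpeaks could look like the following; x stands for one EEG channel (a 1-D vector, shown here as a random placeholder), and a cell array is used because the segments have different lengths:
x = randn(1,1000);                       % placeholder for one EEG channel
[~, locs] = findpeaks(x);                % sample indices of the local maxima
segments = cell(1, numel(locs)-1);       % one cell per inter-peak segment
for n = 1:numel(locs)-1
    segments{n} = x(locs(n):locs(n+1));  % samples from peak n to peak n+1
end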

Related

K-nearest neighbourhood in a specific range in MATLAB

I am dealing with a k-nearest-neighbour problem in MATLAB. There is an image with r rows and c columns, and I divide it into r*c blocks, where each block represents a patch centered at one pixel.
I want to find the k nearest neighbours of each block within a specific search range. At first I used knnsearch with a kd-tree:
ns = createns(Block','nsmethod','kdtree');
[Index_nearest,dist] = knnsearch(ns,Block','k',k+1);
However, I find that it looks for the k nearest neighbours among all blocks, instead of within the specific range. Is there another method to achieve this? Could anyone give me some hints? Thanks in advance!
Edit: the code for knnsearch:
function [Index_nearest, Weight] = Compute_Weight(Input, Options)
    % Input the data and pre-processing
    windowsize = Options.winsize;
    k = Options.directionsize;
    deviation = Options.deviation;   % Deviation for Gaussian kernel
    h = Options.h;                   % This parameter is for controlling the weights
    [r,c] = size(Input);
    In_pad = padarray(Input, [windowsize windowsize], 'symmetric');
    window_size = (2*windowsize+1)*(2*windowsize+1);
    Block = zeros(window_size, r*c);
    %% Split the input data into blocks
    for i = 1:r
        for j = 1:c
            block = In_pad(i:i+2*windowsize, j:j+2*windowsize);
            Block(:, r*(i-1)+j) = block(:);   % expand column by column
        end
    end
    %% Find k-nearest neighbour blocks
    % Create a kd-tree with all local patches
    ns = createns(Block', 'nsmethod', 'kdtree');
    % Find the patches closest in intensity to the local patch itself
    [Index_nearest, ddd] = knnsearch(ns, Block', 'k', k+1);
    Index_nearest = Index_nearest';
    Index_nearest = Index_nearest(2:k+1, :);
end
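Since the question asks for neighbours only within a spatial search range, one possible direction is to restrict the candidate set per pixel before running the k-NN search. The sketch below is not part of the original Compute_Weight; it assumes a new search-radius parameter s, reuses Block, r, c and k from above, keeps the same column layout r*(i-1)+j for patch (i,j), and assumes the window always contains at least k+1 patches (pdist2, like knnsearch, needs the Statistics and Machine Learning Toolbox):
idx = @(i,j) r*(i-1) + j;        % column index used when Block was filled
s = 10;                          % assumed spatial search radius, in pixels
Index_nearest = zeros(k, r*c);
for i = 1:r
    for j = 1:c
        % candidate patches whose centres lie within +/- s pixels of (i,j)
        [jj, ii] = meshgrid(max(1,j-s):min(c,j+s), max(1,i-s):min(r,i+s));
        cand = idx(ii(:), jj(:));
        % k+1 nearest candidates in intensity (the first hit is the patch itself)
        [~, nn] = pdist2(Block(:,cand)', Block(:,idx(i,j))', 'euclidean', 'Smallest', k+1);
        Index_nearest(:, idx(i,j)) = cand(nn(2:end));   % drop self, map back to global indices
    end
end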

Wind rose diagram plot in matlab

I am currently trying to plot a wind rose diagram in MATLAB, using wind velocity and direction data for a given period.
The main program, after producing several plots of the Weibull distribution, calls another MATLAB program to produce a wind rose.
The wind rose program is essentially here: https://www.mathworks.com/matlabcentral/fileexchange/47248-wind-rose
and the main program is mainly based on https://www.mathworks.com/matlabcentral/fileexchange/41996-computing-weibull-distribution-parameters-from-a-wind-speed-time-series?focused=3786165&tab=function
Yesterday I was working on a very old MATLAB edition and had serious problems getting the code to run.
Today, with Octave on an Ubuntu machine and after some effort, I managed to get a result with a minor problem: the wind rose did not have all the information it should have.
I then ran the program in a new version of MATLAB and got the following message:
Error using WindRose (line 244)
is not a valid property for WindRose function.
Error in octavetestoforiginalprogram (line 184)
[figure_handle,count,speeds,directions,Table] = WindRose(dir,vel,Options);
How the program can run in Octave and yet now produce such an error, I don't understand.
Does anyone have an idea of what this error means?
Note: I am posting the entire code below if anyone wants to read it:
%% EXTRACT AND PLOT RAW DATA
% Extract wind speed data from a file
v = xlsread('1981-1985_timeseries.xlsx');
% Plot the measured wind speed
plot(v)
title('Wind speed time series');
xlabel('Measurement #');
ylabel('Wind speed [m/s]');
%% PROCESS DATA
% Remove nil speed data (to avoid infeasible solutions in the following)
v(find(v==0)) = [];
% Extract the unique values occuring in the series
uniqueVals = unique(v);
uniqueVals(isnan(uniqueVals))=[];
% Get the number of unique values
nbUniqueVals = length(uniqueVals);
% Find the number of occurrences of each unique wind speed value
for i=1:nbUniqueVals
    nbOcc = v(find(v==uniqueVals(i)));
    N(i) = length(nbOcc);
end
% Get the total number of measurements
nbMeas = sum(N);
% To take into account the measurement resolution
% (i.e., a measured wind speed of 2.0 m/s may actually correspond to a
% real wind speed of 2.05 or 1.98 m/s), compute the delta vector which
% contains the difference between two consecutive unique values
delta(1) = uniqueVals(1);
for i=2:nbUniqueVals
    delta(i) = uniqueVals(i) - uniqueVals(i-1);
end
% Get the frequency of occurrence of each unique value
for i=1:nbUniqueVals
    prob(i) = N(i)/(nbMeas*delta(i));
end
% Get the cumulated frequency
freq = 0;
for i=1:nbUniqueVals
    freq = prob(i)*delta(i) + freq;
    cumFreq(i) = freq;
end
%% PLOT THE RESULTING DISTRIBUTION
% Plot the distribution
figure
subplot(2,1,1);
pp=plot(uniqueVals,prob)
title('Distribution extracted from the time series');
xlabel('Wind speed [m/s]');
ylabel('Probability');
% Plot the cumulative distribution
subplot(2,1,2);
plot(uniqueVals,cumFreq)
title('Cumulative distribution extracted from the time series');
xlabel('Wind speed [m/s]');
ylabel('Cumulative probability');
%% EXTRACT THE PARAMETERS USING A GRAPHICAL METHOD
% See the following references for more explanations:
% - Akdag, S.A. and Dinler, A., A new method to estimate Weibull parameters
%   for wind energy applications, Energy Conversion and Management,
%   50(7), 1761-1766, 2009
% - Seguro, J.V. and Lambert, T.W., Modern estimation of the parameters of
%   the Weibull wind speed distribution for wind energy analysis, Journal of
%   Wind Engineering and Industrial Aerodynamics, 85(1), 75-84, 2000
% Linearize distributions (see papers)
ln = log(uniqueVals);
lnln = log(-log(1-cumFreq));
% Check whether the vectors contain infinite values; if so, remove them
test = isinf(lnln);
for i=1:nbUniqueVals
    if (test(i)==1)
        ln(i) = [];
        lnln(i) = [];
    end
end
% Extract the line parameters (y=ax+b) using the polyfit function
params = polyfit(ln,lnln',1);
a = params(1);
b = params(2);
y=a*ln+b;
% Compare the linearized curve and its fitted line
figure
plot(ln,y,'b',ln,lnln,'r')
title('Linearized curve and fitted line comparison');
xlabel('x = ln(v)');
ylabel('y = ln(-ln(1-cumFreq(v)))');
% Extract the Weibull parameters c and k
k = a
c = exp(-b/a)
%% CHECK RESULTS
% Define the cumulative Weibull probability density function
% F(V) = 1-exp(-((v/c)^k)) = 1-exp(-a2), with a1 = v/c, a2 = (v/c)^k
a1 = uniqueVals/c;
a2 = a1.^k;
cumDensityFunc = 1-exp(-a2);
% Define the Weibull probability density function
%f(v)=k/c*(v/c)^(k-1)*exp(-((v/c)^k))=k2*a3.*exp(-a2),
% with k2 = k/c, a3 = (v/c)^(k-1)
k1 = k-1;
a3 = a1.^k1;
k2 = k/c;
densityFunc = k2*a3.*exp(-a2);
% Plot and compare the obtained Weibull distribution with the frequency plot
figure
subplot(2,2,1);
pp=plot(uniqueVals,prob,'.',uniqueVals,densityFunc, 'r')
title('Weibull probability density function');
xlabel('v');
ylabel('f(v)');
subplot(2,2,3)
h=hist(v);
title('Wind speed time series');
xlabel('Measurement #');
ylabel('Wind speed [m/s]');
h=h/(sum(h)*10);
bar(h)
% Same for the cumulative distribution
subplot(2,2,2);
plot(uniqueVals,cumFreq,'.',uniqueVals,cumDensityFunc, 'r')
title('Cumulative Weibull probability density function');
xlabel('v');
ylabel('F(V)');
%inner
figure
hold on
pp=plot(uniqueVals,prob,'.',uniqueVals,densityFunc, 'r')
title('Weibull probability density function');
xlabel('v');
ylabel('f(v)');
bar(h)
hold off
%inner
%rose
w=xlsread('rose.xlsx');
dir=w(:,2)*10;
vel=w(:,1);
Options = {'anglenorth','FreqLabelAngle',0,'angleeast','FreqLabelAngle',90,'labels',{'N (0)','S (180)','E (90)','W (270)'},'freqlabelangle',45,'nDirections',20,'nFreq',25,'LegendType',1};
[figure_handle,count,speeds,directions,Table] = WindRose(dir,vel,Options);
close all; clear Options;
After a quick read of the script documentation, here is what I found concerning the creation of the windrose plot:
% With options in a cell array:
Options = {'anglenorth',0,'angleeast',90,'labels',{'N (0°)','S (180°)','E (90°)','W (270°)'},'freqlabelangle',45};
[figure_handle,count,speeds,directions,Table] = WindRose(dir,spd,Options);
% With options in a structure:
Options.AngleNorth = 0;
Options.AngleEast = 90;
Options.Labels = {'N (0°)','S (180°)','E (90°)','W (270°)'};
Options.FreqLabelAngle = 45;
[figure_handle,count,speeds,directions,Table] = WindRose(dir,spd,Options);
close all;
% Usual calling:
[figure_handle,count,speeds,directions,Table] = WindRose(dir,spd,'anglenorth',0,'angleeast',90,'labels',{'N (0°)','S (180°)','E (90°)','W (270°)'},'freqlabelangle',45);
Your error is:
Error using WindRose (line 244)
is not a valid property for WindRose function.
Error in octavetestoforiginalprogram (line 184)
[figure_handle,count,speeds,directions,Table] = WindRose(dir,vel,Options);
It is produced within the routine that sanitizes the option arguments. Since options must be provided as name-value pairs, it seems the script is detecting a mismatched number of elements, i.e. one or more names with a missing value. Here is your Options cell:
Options = {'anglenorth','FreqLabelAngle',0,'angleeast','FreqLabelAngle',90, ...
           'labels',{'N (0)','S (180)','E (90)','W (270)'}, ...
           'freqlabelangle',45,'nDirections',20,'nFreq',25,'LegendType',1};
The two stray 'FreqLabelAngle' strings inserted right after 'anglenorth' and 'angleeast' leave those two properties without a numeric value (unlike the examples in the tutorial), and this probably messes up the whole parametrization. Most likely the first option being extracted is anglenorth = 'FreqLabelAngle', which is not correct.
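Assuming the intent was simply north at 0 and east at 90 degrees, as in the tutorial examples, a corrected cell array would drop the two stray 'FreqLabelAngle' strings and keep the remaining options as they were (I have not verified every remaining property name against the WindRose documentation):
Options = {'anglenorth',0,'angleeast',90, ...
           'labels',{'N (0)','S (180)','E (90)','W (270)'}, ...
           'freqlabelangle',45,'nDirections',20,'nFreq',25,'LegendType',1};
[figure_handle,count,speeds,directions,Table] = WindRose(dir,vel,Options);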

MATLAB: 3d reconstruction using eight point algorithm

I am trying to achieve 3D reconstruction from 2 images. The steps I followed are:
1. Found corresponding points between the 2 images using SURF.
2. Implemented the eight-point algorithm to find the fundamental matrix.
3. Then implemented triangulation.
I have the fundamental matrix and the results of triangulation so far. How do I proceed further to get the 3D reconstruction? I'm confused by all the material available on the internet.
Also, this is the code. Let me know if it is correct or not.
Ia=imread('1.jpg');
Ib=imread('2.jpg');
Ia=rgb2gray(Ia);
Ib=rgb2gray(Ib);
% My surf addition
% collect Interest Points from Each Image
blobs1 = detectSURFFeatures(Ia);
blobs2 = detectSURFFeatures(Ib);
figure;
imshow(Ia);
hold on;
plot(selectStrongest(blobs1, 36));
figure;
imshow(Ib);
hold on;
plot(selectStrongest(blobs2, 36));
title('Thirty strongest SURF features in I2');
[features1, validBlobs1] = extractFeatures(Ia, blobs1);
[features2, validBlobs2] = extractFeatures(Ib, blobs2);
indexPairs = matchFeatures(features1, features2);
matchedPoints1 = validBlobs1(indexPairs(:,1),:);
matchedPoints2 = validBlobs2(indexPairs(:,2),:);
figure;
showMatchedFeatures(Ia, Ib, matchedPoints1, matchedPoints2);
legend('Putatively matched points in I1', 'Putatively matched points in I2');
for i=1:matchedPoints1.Count
    xa(i,:) = matchedPoints1.Location(i,1);
    ya(i,:) = matchedPoints1.Location(i,2);
    xb(i,:) = matchedPoints2.Location(i,1);
    yb(i,:) = matchedPoints2.Location(i,2);
end
matchedPoints1.Count
figure(1) ; clf ;
imshow(cat(2, Ia, Ib)) ;
axis image off ;
hold on ;
xbb=xb+size(Ia,2);
set=[1:matchedPoints1.Count];
h = line([xa(set)' ; xbb(set)'], [ya(set)' ; yb(set)']) ;
pts1=[xa,ya];
pts2=[xb,yb];
pts11=pts1;pts11(:,3)=1;
pts11=pts11';
pts22=pts2;pts22(:,3)=1;pts22=pts22';
width=size(Ia,2);
height=size(Ib,1);
F=eightpoint(pts1,pts2,width,height);
[P1new,P2new]=compute2Pmatrix(F);
XP = triangulate(pts11, pts22,P2new);
eightpoint()
function [ F ] = eightpoint( pts1, pts2,width,height)
X = 1:width;
Y = 1:height;
[X, Y] = meshgrid(X, Y);
x0 = [mean(X(:)); mean(Y(:))];
X = X - x0(1);
Y = Y - x0(2);
denom = sqrt(mean(mean(X.^2+Y.^2)));
N = size(pts1, 1);
%Normalized data
T = sqrt(2)/denom*[1 0 -x0(1); 0 1 -x0(2); 0 0 denom/sqrt(2)];
norm_x = T*[pts1(:,1)'; pts1(:,2)'; ones(1, N)];
norm_x_ = T*[pts2(:,1)';pts2(:,2)'; ones(1, N)];
x1 = norm_x(1, :)';
y1= norm_x(2, :)';
x2 = norm_x_(1, :)';
y2 = norm_x_(2, :)';
A = [x1.*x2, y1.*x2, x2, ...
x1.*y2, y1.*y2, y2, ...
x1, y1, ones(N,1)];
% compute the SVD
[~, ~, V] = svd(A);
F = reshape(V(:,9), 3, 3)';
[FU, FS, FV] = svd(F);
FS(3,3) = 0; % rank-2 constraint
F = FU*FS*FV';
% rescale fundamental matrix
F = T' * F * T;
end
triangulate()
function [ XP ] = triangulate( pts1,pts2,P2 )
n=size(pts1,2);
X=zeros(4,n);
for i=1:n
    A = [-1, 0, pts1(1,i), 0;
          0,-1, pts1(2,i), 0;
         pts2(1,i)*P2(3,:)-P2(1,:);
         pts2(2,i)*P2(3,:)-P2(2,:)];
    [~,~,va] = svd(A);
    X(:,i) = va(:,4);
end
XP(:,:,1) = [X(1,:)./X(4,:);X(2,:)./X(4,:);X(3,:)./X(4,:); X(4,:)./X(4,:)];
end
function [ P1,P2 ] = compute2Pmatrix( F )
P1=[1,0,0,0;0,1,0,0;0,0,1,0];
[~, ~, V] = svd(F');
ep = V(:,3)/V(3,3);
P2 = [skew(ep)*F,ep];
end
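Note that compute2Pmatrix calls skew, which is not a MATLAB built-in and is not shown in the post; presumably it is the usual 3x3 cross-product (skew-symmetric) matrix of the epipole, e.g.:
function S = skew(v)
% Cross-product matrix of a 3-vector: skew(v)*x equals cross(v,x)
S = [    0   -v(3)   v(2);
       v(3)     0   -v(1);
      -v(2)   v(1)     0 ];
end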
From a quick look, it looks correct. Some notes are as follows:
Your normalization code in eightpoint() is not ideal.
It is best done on the points involved: each set of points should have its own scaling matrix. That is:
[pts1_n, T1] = normalize_pts(pts1);
[pts2_n, T2] = normalize_pts(pts2);
% ... code
% solution
F = T2' * F * T1
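For completeness, a minimal sketch of such a per-point-set normalization (Hartley-style isotropic scaling), assuming pts is N-by-2; normalize_pts is a hypothetical helper, not part of the original post:
function [pts_n, T] = normalize_pts(pts)
% Translate so the centroid is at the origin, then scale so the mean
% distance from the origin is sqrt(2). pts is N-by-2; pts_n is 3-by-N homogeneous.
c = mean(pts, 1);                                     % centroid
d = mean(sqrt(sum(bsxfun(@minus, pts, c).^2, 2)));    % mean distance to centroid
s = sqrt(2) / d;
T = [s 0 -s*c(1);
     0 s -s*c(2);
     0 0  1];
pts_n = T * [pts'; ones(1, size(pts,1))];
end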
As a side note (for efficiency) you should do
[~,~,V] = svd(A, 0);
You also want to enforce the constraint that the fundamental matrix has rank-2. After you compute F, you can do:
[U,D,V] = svd(F);
F = U * diag([D(1,1), D(2,2), 0]) * V';
In either case, normalization is not the only key to making the algorithm work. You'll want to wrap the estimation of the fundamental matrix in a robust estimation scheme like RANSAC.
Estimation problems like this are very sensitive to non-Gaussian noise and outliers. If you have a small number of wrong correspondences, or points with high error, the algorithm will break down.
Finally, in triangulate() you want to make sure that the points are not at infinity prior to the homogeneous division.
I'd recommend testing the code with synthetic data. That is, generate your own camera matrices and correspondences, and feed them to the estimation routine with varying levels of noise. With zero noise, you should get an exact solution up to floating-point accuracy. As you increase the noise, your estimation error increases.
In its current form, running this on real data will probably not do well unless you 'robustify' the algorithm with RANSAC, or some other robust estimator.
Good luck.
Which version of MATLAB do you have?
There is a function called estimateFundamentalMatrix in the Computer Vision System Toolbox, which will give you the fundamental matrix. It may give you better results than your code, because it is using RANSAC under the hood, which makes it robust to spurious matches. There is also a triangulate function, as of version R2014b.
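For reference, a minimal sketch of that toolbox route, reusing matchedPoints1/matchedPoints2 from the code above; the RANSAC parameter values here are arbitrary choices, and the local triangulate.m from the question should be renamed so it does not shadow the toolbox function of the same name:
% Robust fundamental matrix: RANSAC discards spurious matches automatically
[F, inlierIdx] = estimateFundamentalMatrix(matchedPoints1, matchedPoints2, ...
    'Method', 'RANSAC', 'NumTrials', 2000, 'DistanceThreshold', 1e-2);
inlier1 = matchedPoints1(inlierIdx);
inlier2 = matchedPoints2(inlierIdx);
% The built-in triangulate (R2014b+) takes the inlier image points plus two
% camera projection matrices; note the toolbox uses a row-vector convention,
% so check its documentation before passing P1new/P2new from compute2Pmatrix.
worldPoints = triangulate(inlier1.Location, inlier2.Location, P1new', P2new');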
What you are getting is sparse 3D reconstruction. You can plot the resulting 3D points, and you can map the color of the corresponding pixel to each one. However, for what you want, you would have to fit a surface or a triangular mesh to the points. Unfortunately, I can't help you there.
If what you're asking is how to proceed from the fundamental matrix plus corresponding points to a dense model, then you still have a lot of work ahead of you.
The relative camera pose (R, T) can be calculated from a fundamental matrix, assuming you know the internal camera parameters (up to scale, rotation, translation). To get a full dense model there are a few ways to go: you can try using an existing library (PMVS, for example). I'd look into OpenMVG, but I'm not sure about its MATLAB interface.
Another way to go: you can compute dense optical flow (many implementations are available for MATLAB). Look for an epipolar OF, which takes a fundamental matrix and restricts the solution to lie on the epipolar lines. Then you can triangulate every pixel to get a depth map.
Finally, you will have to play with format conversions to get from a depth map to VRML (you can look at MeshLab).
Sorry my answer isn't more Matlab oriented.

How to compute the Cumulative Distribution Function of an image in MATLAB

I need to compute the Cumulative Distribution Function of an image. I normalized the values using the following code:
im = imread('cameraman.tif');
im_hist = imhist(im);
tf = cumsum(im_hist); %transformation function
tf_norm = tf / max(tf);
plot(tf_norm), axis tight
Also, when the CDF is plotted, should the plot ideally be a straight line, representing an equal representation of all pixel intensities?
You can obtain a CDF very easily by:
A = imread('cameraman.tif');
[histIM, bins] = imhist(A);
cdf = cumsum(histIM) / sum(histIM);
plot(cdf); % If you want to be more precise on the X axis plot it against bins
For the famous cameraman.tif the result is a monotonically non-decreasing CDF curve.
As for your second question: when the histogram is perfectly equalized (i.e., when roughly the same number of pixels falls at each intensity), the CDF will look like a straight 45° line.
EDIT: Strictly speaking, cumsum alone does not give a proper CDF: a CDF describes a probability, hence it must obey the probability axioms. In particular, the first axiom tells us that a probability value must lie in the range [0, 1], and cumsum alone does not guarantee that.
function icdf = imgcdf(img)
% Author: Javier Montoya (jmontoyaz@gmail.com).
% http://www.lis.ic.unicamp.br/~jmontoya
%
% IMGCDF calculates the Cumulative Distribution Function of image I.
% Input parameters:
%    img:  image I (passed as a bidimensional matrix).
% Output parameters:
%    icdf: cumulative distribution function.
%
% See also: IMGHIST
%
% Usage:
%    I = imread('tire.tif');
%    icdf = imgcdf(I);
%    figure; stem(icdf); title('Cumulative Distribution Function (CDF)');
   if exist('img', 'var') == 0
      error('Error: Specify an input image.');
   end
   ihist   = imghist(img);
   maxgval = 255;
   icdf    = zeros(1, maxgval+1);   % one bin per grey level 0..255
   icdf(1) = ihist(1);
   for i = 2:maxgval+1
      icdf(i) = ihist(i) + icdf(i-1);
   end
end
It's not my code, but it works for me! Also check the cdf function in the Statistics Toolbox.
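Following the note above about the probability axioms, the cumulative histogram that imgcdf returns can be rescaled into a proper CDF with one extra line:
I = imread('tire.tif');
icdf = imgcdf(I);
icdf = icdf / icdf(end);    % now non-decreasing and bounded in [0, 1]
figure; stem(icdf); title('Normalized CDF');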

MATLAB: K-means clustering

I have a matrix A (369x10) which I want to cluster into 19 clusters.
I use this method
[idx ctrs]=kmeans(A,19)
which yields
idx (369x1) and ctrs (19x10)
I get the point up to here: all the rows of A are clustered into 19 clusters.
Now I have an array B (49x10), and I want to know to which of the given 19 clusters each of its rows corresponds.
How is this possible in MATLAB?
Thank you in advance.
The following is a complete example on clustering:
%% generate sample data
K = 3;
numObservarations = 100;
dimensions = 3;
data = rand([numObservarations dimensions]);
%% cluster
opts = statset('MaxIter', 500, 'Display', 'iter');
[clustIDX, clusters, interClustSum, Dist] = kmeans(data, K, 'options',opts, ...
'distance','sqEuclidean', 'EmptyAction','singleton', 'replicates',3);
%% plot data+clusters
figure, hold on
scatter3(data(:,1),data(:,2),data(:,3), 50, clustIDX, 'filled')
scatter3(clusters(:,1),clusters(:,2),clusters(:,3), 200, (1:K)', 'filled')
hold off, xlabel('x'), ylabel('y'), zlabel('z')
%% plot clusters quality
figure
[silh,h] = silhouette(data, clustIDX);
avrgScore = mean(silh);
%% Assign data to clusters
% calculate distance (squared) of all instances to each cluster centroid
D = zeros(numObservarations, K); % init distances
for k=1:K
    % squared Euclidean distance from every instance to centroid k
    D(:,k) = sum( ((data - repmat(clusters(k,:),numObservarations,1)).^2), 2);
end
% find, for each instance, the closest cluster
[minDists, clusterIndices] = min(D, [], 2);
% compare it with what you expect it to be
sum(clusterIndices == clustIDX)
I can't think of a better way to do it than what you described. A built-in function would save one line, but I couldn't find one. Here's the code I would use:
[ids ctrs]=kmeans(A,19);
D = dist([testpoint;ctrs]); %testpoint is 1x10 and D will be 20x20
[distance testpointID] = min(D(1,2:end));
I don't know if I get your meaning right, but if you want to know which cluster your points belong to, you can use the knnsearch function easily. It takes two arguments and searches the first argument (the reference set) for the point closest to each row of the second argument.
Assuming you're using the squared Euclidean distance metric, try this:
for i = 1:size(ctrs,1)   % loop over the 19 centroids (rows of ctrs)
    d(:,i) = sum((B-ctrs(repmat(i,size(B,1),1),:)).^2,2);
end
[distances,predicted] = min(d,[],2)
predicted should then contain the index of the closest centroid, and distances should contain the distances to the closest centroid.
Take a look inside the kmeans function, at the subfunction 'distfun'. This shows you how to do the above, and also contains the equivalents for other distance metrics.
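If the Statistics Toolbox is available, the loop above can also be collapsed into a single call; a minimal sketch using pdist2 (knnsearch works the same way) on the centroids returned by kmeans:
[idx, ctrs] = kmeans(A, 19);                                % as in the question
[distances, predicted] = pdist2(ctrs, B, 'euclidean', 'Smallest', 1);
% predicted(j) is the cluster index of B(j,:), distances(j) its distance
% to that centroid; equivalently, predicted = knnsearch(ctrs, B);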
For a small amount of data, you could do
[testpointID,dum] = find(permute(all(bsxfun(@eq,B,permute(ctrs,[3,2,1])),2),[3,1,2]))
but this is somewhat obscure; the bsxfun with the permuted ctrs creates a 49 x 10 x 19 array of booleans, which is then 'all-ed' across the second dimension, permuted back, and then the row ids are found. Again, this is probably not practical for large amounts of data.