Using Optimization algorithm for white matter fiber tracking - matlab

I am using MATLAB to implement my work, which can be summarized as follows:
I use a program that applies a Bayesian approach to fiber tracking. I have a problem loading the dataset, which is a medical image. I would appreciate any hint on how to open this dataset so I can continue my work. The following code shows the data-loading function:
function data = Load_DMRI_Data(dataset)
% Load diffusion tensor MRI data and return it in
% a structure together with gradients and b-values.
% The data must be stored in a data structure for further
% processing, see code.
% The gradients are stored as a (3 x g) matrix, where g is the
% number of acquired DWI volumes (including the b=0 ones).
% The b-values are stored in a corresponding (1 x g) vector.
if strcmp(dataset,'gordon')
    readdir = '/projects/lmi/data/diffusion/gk-3t/041020-02156-bvalexpr/data/003/';
    intensity = zeros(256,256,31,32);
    for g = 1:32
        for slice = 1:31
            fid = fopen(sprintf('%sI.%03d',readdir,(g-1)*31+slice),'r');
            im = fread(fid,'int16');
            im = im(end-256^2+1:end);
            intensity(:,:,slice,g) = reshape(im,[256 256])';
            fclose(fid);
        end
    end
    G = load('gradients.mat');
    G = [[1;1;1] G.g]; % Add arbitrary gradient direction for b=0
    b = [0 1000*ones(1,31)];
    data = struct('intensity',intensity,'G',G,'b',b,'FOV',240,'SliceThickness',4);
elseif strcmp(dataset,'pc')
    readdir = 'c:\Work\DiffusionData\';
    intensity = zeros(256,256,1,32);
    for g = 1:32
        for slice = 16:16 % Use slice 16 as test slice
            fid = fopen(sprintf('%sI.%03d',readdir,(g-1)*31+slice),'r','ieee-be');
            im = fread(fid,'int16');
            im = im(end-256^2+1:end);
            intensity(:,:,slice-15,g) = reshape(im,[256 256])';
            fclose(fid);
        end
    end
    G = load(sprintf('%sgradients.mat',readdir));
    G = [[1;1;1] G.g]; % Add arbitrary gradient direction for b=0
    b = [0 1000*ones(1,31)]; % b-values
    data = struct('intensity',intensity,'G',G,'b',b);
end
The website for the dataset: http://www.sci.utah.edu/~gk/DTI-data/
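As a debugging hint for the loading step, a minimal sketch like the following can be used to check that a raw file actually opens and contains the expected number of int16 values before running the full loops. The path is a placeholder of your own; the I.%03d file names and the big-endian byte order are taken from the 'pc' branch above, not from the dataset documentation:

readdir = 'c:\Work\DiffusionData\';        % placeholder path - change to your own
fname = sprintf('%sI.%03d', readdir, 1);   % first DWI file
fid = fopen(fname, 'r', 'ieee-be');        % big-endian, as in the 'pc' branch
if fid == -1
    error('Could not open %s - check the path and file names.', fname);
end
im = fread(fid, 'int16');
fclose(fid);
fprintf('Read %d int16 values from %s (expected at least %d).\n', numel(im), fname, 256^2);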

Related

Surface plot using multiple excel files

I'm just a beginner with MATLAB.
I need to represent my data using a 3D plot. I tried using the surf function, but in my case this is a little bit tricky. From the code you will see that I am looping through multiple Excel files to extract data from them.
In the 3-dimensional plot I need my Fv array as the 'X' coordinate, my FTsiga as the 'Y' coordinate, and the third coordinate should be the index of the Excel file that I am looping through.
A plot of (Fv, FTsiga) looks like the attached figure:
Fv (x-axis), FTsiga (y-axis)
The code that I have written so far does not run to completion: MATLAB either crashed due to insufficient memory or got stuck in a loop. The latter is more likely, I guess.
% Matlab trial code: Trying to loop through excel files stored in directory
source_dir = 'C:\UTwente\Q4\Structural Health and Condition monitoring\Case Roadbridge (Zwartewaterbrug)\Excel data';
source_files = dir(fullfile(source_dir, '*xlsx'));
len = length(source_files);
matrix = zeros(len,32);
X = zeros(len,32); Y = zeros(len,32); Z = zeros(len,32);
% looping through excel files in directory
for i = 1:len
    data = xlsread(source_files(i).name,'Measurement data');
    for j = 1:32
        sig = data(:,j);
        sig = sig - mean(sig);                          % Remove d-c offset
        L = length(sig);
        Fs = 1000;                                      % Sampling frequency
        Fn = Fs/2;                                      % Nyquist frequency
        FTsig = fft(sig)/L;
        Fv = linspace(0, 1, fix(length(FTsig)/2)+1)*Fn; % Frequency vector
        Iv = 1:length(Fv);                              % Index vector
        FTsiga = double(abs(FTsig(Iv))*2);              % Truncate, magnitude, convert to double
        sgf_sm = sgolayfilt(FTsiga, 5, 501);            % Create 'sgolayfilt' filtered FFT
        [~, idx] = max(sgf_sm);                         % Getting the value of the modal frequency
        peakfreq = Fv(idx);
        matrix(i,j) = peakfreq;                         % Matrix of all peak frequencies
        [X,Y,Z] = meshgrid(Fv,FTsiga,i);
    end
    surf(Fv,FTsiga,i);
end
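For reference, a minimal sketch of one way a single surface could be built from these spectra. This is not the original code: it treats FTsiga as the surface height rather than as the y coordinate (a surface needs a 2-D array of heights), uses only channel 1 of each file for illustration, and assumes every file yields a spectrum of the same length:

source_dir = 'C:\path\to\excel\data';               % placeholder path
source_files = dir(fullfile(source_dir, '*.xlsx'));
len = length(source_files);
Fs = 1000;                                          % sampling frequency, as in the question
spectra = [];                                       % one row per file
for i = 1:len
    data = xlsread(fullfile(source_dir, source_files(i).name), 'Measurement data');
    sig = data(:,1) - mean(data(:,1));              % channel 1 only, d-c offset removed
    L = length(sig);
    FTsig = fft(sig)/L;
    Fv = linspace(0, 1, fix(L/2)+1)*Fs/2;           % frequency vector
    FTsiga = abs(FTsig(1:length(Fv)))*2;            % single-sided magnitude spectrum
    spectra(i,:) = FTsiga.';                        % assumes all files have the same length L
end
% X: frequency, Y: file index, Z: spectrum magnitude
surf(Fv, 1:len, spectra, 'EdgeColor', 'none');
xlabel('Frequency (Hz)'); ylabel('File index'); zlabel('|FFT|');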

Get the size of HOG feature vector - MATLAB

I'm a beginner in image processing and I'm using MATLAB to extract HOG features from images to train an SVM classifier. The size of the training images is 480*640 pixels, and I'm getting 167796 features with the default settings of the built-in MATLAB extractHOGFeatures function. However, when I test the model it gives me fewer features (216 features only!), even though the testing images have the same size as the training images. I get this error in MATLAB: "The number of columns in TEST and training data must be equal".
Do you have any clue how to solve this problem and get a feature vector of the same size for the training and testing sets?
Here is the code,
[fpos,fneg] = featuress(pathPos, pathNeg);
%train SVM
HOG_featV = loadingV(fpos,fneg); % loading and labeling each training example
%% Detection
tSize = [24 32];
testImPath = '.\face_detection\dataset\bikes_and_persons2\';
imlist = dir([testImPath '*.bmp']);
for j = 1:length(imlist)
    disp('inside for loop');
    img = imread([testImPath imlist(j).name]);
    axis equal; axis tight; axis off;
    imshow(img); hold on;
    detect(img,model,tSize);
end
%% training
function [fpos, fneg] = featuress(pathPos,pathNeg)
% extract features for positive examples
imlist = dir([pathPos '*.bmp']);
for i = 1:length(imlist)
    im = imread([pathPos imlist(i).name]);
    fpos{i} = extractHOGFeatures(double(im));
end
% extract features for negative examples
imlist = dir([pathNeg '*.bmp']);
for i = 1:length(imlist)
    im = imread([pathNeg imlist(i).name]);
    fneg{i} = extractHOGFeatures(double(im));
end
end
%% testing function
function detect(im,model,wSize)
topLeftRow = 1;
topLeftCol = 1;
[bottomRightCol, bottomRightRow, d] = size(im);
fcount = 1;
for y = topLeftCol:bottomRightCol-wSize(2)
    for x = topLeftRow:bottomRightRow-wSize(1)
        p1 = [x,y];
        p2 = [x+(wSize(1)-1), y+(wSize(2)-1)];
        po = [p1; p2];
        img = imcut(po,im);
        featureVector{fcount} = extractHOGFeatures(double(img));
        boxPoint{fcount} = [x,y];
        fcount = fcount+1;
        x = x+1;
    end
end
lebel = ones(length(featureVector),1);
P = cell2mat(featureVector');
% each row of P' corresponds to a window
predictions = svmclassify(model, P); % classifying each window
[a, indx] = max(predictions);
bBox = cell2mat(boxPoint(indx));
rectangle('Position',[bBox(1),bBox(2),24,32],'LineWidth',1,'EdgeColor','r');
end
Thanks in advance.
What's the size of P? Is it 167796 x 216? If so, then you should not transpose featureVector when you call cell2mat, or you should transpose P before you use it. You can also make featureVector a matrix rather than a cell array: since you know that the length of the HOG vector is 167796 and you know how many images you have, you can pre-allocate it up front and fill in the rows.
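A rough sketch of that pre-allocation idea for the positive training examples (pathPos, the .bmp listing, and the 167796 feature length are taken from the question; everything else is an assumption):

% Sketch: store positive training features as rows of a pre-allocated matrix
imlist  = dir([pathPos '*.bmp']);
numImgs = length(imlist);
featLen = 167796;                      % HOG length for a 480x640 image with default settings
fposMat = zeros(numImgs, featLen);     % one row per training image
for i = 1:numImgs
    im = imread([pathPos imlist(i).name]);
    fposMat(i,:) = extractHOGFeatures(double(im));
end
% fposMat can now be passed to the SVM training step directly,
% with each row corresponding to one image.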

Decrease Matlab for loop

% read image
I = imread(im);
H = zeros(256,1);
% m = number of rows (height), n = number of columns (width)
[m,n] = size(I);
% loop for checking values
for GrayValue = 0:255
    for i = 1:m
        for j = 1:n
            if I(i,j) == GrayValue % read each pixel coordinate
                H(GrayValue+1) = H(GrayValue+1)+1;
            end
        end
    end
end
This code takes an image file im and computes a histogram of the gray-value counts in the image. How can I reduce the time taken to execute this MATLAB code? It is similar to imhist(), but I want to write my own version of imhist(). I cannot figure out which loop to remove.
Obligatory bsxfun solution:
H = sum(bsxfun(@eq, I(:), 0:255))
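On R2016b and newer, implicit expansion lets you write the same thing without bsxfun; a minimal sketch:

H = sum(I(:) == (0:255));   % compare every pixel against every gray level at once
H = H.';                    % column vector, to match the H built by the loops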
And a method that might be easier to understand. This just replaces your two inner loops with a search.
I = randi(256,50,50)-1;
H = zeros(256,1);
G = zeros(256,1);
for GrayValue = 0:255
    [ii,jj] = find(I==GrayValue);
    G(GrayValue+1) = length(ii);
end
Assuming I to be a grayscale image, you can use histc to get H -
H = sum(histc(I,0:255),2)
Thus, the complete code would be -
% read image
I = imread(im);
H = sum(histc(I,0:255),2)
If you were looking for a little less-advanced code, you can avoid one nested loop -
% read image
I = imread(im);
H = zeros(256,1);
% loop for checking values
for GrayValue = 0:255
    for i = 1:numel(I)
        if I(i) == GrayValue % read each pixel coordinate
            H(GrayValue+1) = H(GrayValue+1)+1;
        end
    end
end
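On newer releases (R2014b and later) histcounts can also replace the loops entirely; a sketch, assuming I holds integer gray values in 0-255:

edges = 0:256;                   % 256 bins: [0,1), [1,2), ..., [255,256]
H = histcounts(I(:), edges).';   % column vector of counts, same as the loop version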

Matlab plot in loop error

I am creating figures in a for loop. The figure is a 2D mesh plot, which is supposed to be updated every iteration. The values to be plotted are in a 200x200 array.
My problem is: the calculation seems to run every iteration, but the plot is always the first one created, no matter whether I just plot it or save it to file.
Here is my code:
x = 1:200;
y = x;
for i = 1:100000
    c = calculate(stuff, c); % value to be created, nothing to do with x and y
    h = figure;
    mesh(x,y,c);
    saveas(h, sprintf('FIG%d.jpg',i));
    drawnow; % did not work with or without this command
    close(h);
end
First, thank you for all your input and suggestions! I didn't expect to get so much help within such a short time!
Let me answer some of the points raised here.
To Daniel: yes, c is changing. The program calculates c from its previous value, and there are enough steps for c to change.
To R.Schifini: I tried pause(.1), but unfortunately it didn't help.
To Andrew: thanks for pointing that out. The complete program is attached now. And, as to Daniel's point, the program calculates the value of c from its previous values.
To The-Duck: I tried clf(h,'reset'), but unfortunately it didn't help.
Complete code:
Main program (please refer to Wikipedia for the physical equation if you are interested):
http://en.wikipedia.org/wiki/Cahn%E2%80%93Hilliard_equation
% Program to calculate composition evolution for nucleation and growth
% by solving the Cahn-Hilliard equation - a time-dependent non-linear
% differential equation
% Parameters
sig = 0.1;        % J/m^2
delta = 10E-9;    % m
D = 1E-9;         % m^2/s
A = 10*sig/delta; % J/m^3
K = 3*sig*delta;  % J/m
M = D/(2*A);      % m^5/(J*s)
N = 200;          % mesh size
dt = 1E-12;       % s
h = delta/10;
% Rng control
r = -1+2.*rand(N);
beta = 1E-3;
n = 10000;
% initialization
c0 = zeros(200);
c0 = c0 + 0.1 + beta.*r;
c = c0;
x = h.*linspace(-N/2,N/2,N);
y = x;
% Iteration
for i = 1:n
    LP_c = laplacian(c,h);
    d_f = A*(4*(c.^3)-6*(c.^2)+2*c);
    sub = d_f - (2*K)*LP_c;
    LP_RHS = laplacian(sub,h);
    RHS = M*LP_RHS;
    c = c + dt.*RHS;
    % Save image every 2000 steps
    % if ( i==1000 || i==10000 || i==100000)
    %     h = mesh(x,y,c);
    %     pause(.1);
    %     saveas(h, sprintf('FIG%d.jpg',i));
    %     clf(h,'reset');
    % end
end
%h = figure;
mesh(x,y,c);
Laplacian function:
function LP_c = laplacian(c,h)
v1 = circshift(c,[0 -1]);
v2 = circshift(c,[0 1]);
v3 = circshift(c,[-1 0]);
v4 = circshift(c,[1 0]);
LP_c = (v1+v2+v3+v4-4.*c)./(h^2);
end
Result:
You can see that the commented part in the main program is for plotting periodically. It gives the same plot at every iteration. I tried the OR version shown above and also tried if (mod(i,2000) == 0) to plot more pictures; there is no difference.
However, if I comment out the periodic plotting and just run the program for different values of n, I get different plots, and they obey the physical laws (an evolving structure), shown in time order.
Therefore I have excluded the possibility that c is not updating itself. It has to be some misuse of MATLAB's plotting functions, or maybe some memory issue?
An interesting point I discovered while editing: if I put the command h = figure in front of the loop and plot after the loop ends, like this:
h = figure;
% Iteration
for i = 1:n
    LP_c = laplacian(c,h);
    d_f = A*(4*(c.^3)-6*(c.^2)+2*c);
    sub = d_f - (2*K)*LP_c;
    LP_RHS = laplacian(sub,h);
    RHS = M*LP_RHS;
    c = c + dt.*RHS;
end
mesh(x,y,c);
it seems all the values of c calculated during the loop overlap and give the figure shown below. I guess this says something about MATLAB's plotting functions, but I am not sure.
Btw, can I reply directly to each comment and highlight the newly added section in my post? Sorry, I am not as familiar with Stack Overflow as I should be :)
I ran your routine and with the following changes it works for me:
% Iteration
for i = 1:n
    LP_c = laplacian(c,h);
    d_f = A*(4*(c.^3)-6*(c.^2)+2*c);
    sub = d_f - (2*K)*LP_c;
    LP_RHS = laplacian(sub,h);
    RHS = M*LP_RHS;
    c = c + dt.*RHS;
    % Save image every 2000 steps
    if ( mod(i,2000)==0 )
        h1 = mesh(x,y,c);
        drawnow;
        saveas(h1, sprintf('FIG%d.jpg',i));
    end
end
The main change is renaming the figure handle variable from h to h1. Why? You are already using the variable h (the grid spacing) in your equations, so reusing h for the figure handle overwrites it.
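As a side note, a sketch of an alternative (not part of the fix above) that creates the surface once and only updates its data, which avoids repeatedly creating graphics objects:

h1 = mesh(x,y,c);                          % create the surface once, before the loop
for i = 1:n
    LP_c = laplacian(c,h);
    d_f = A*(4*(c.^3)-6*(c.^2)+2*c);
    sub = d_f - (2*K)*LP_c;
    c = c + dt.*(M*laplacian(sub,h));
    if mod(i,2000) == 0
        set(h1, 'ZData', c);               % update the existing surface instead of redrawing it
        drawnow;
        saveas(gcf, sprintf('FIG%d.jpg',i));
    end
end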
Regards,

Matrices kernelpca

We are working on a project and trying to get some results with KPCA.
We have a dataset (handwritten digits) and have taken the first 200 samples of each digit, so our complete traindata matrix is 2000x784 (784 are the dimensions).
When we do KPCA we get a matrix with the new low-dimensional dataset, e.g. 2000x100. However, we don't understand the result. Shouldn't we get other matrices, as we do when we use SVD for PCA? The code we use for KPCA is the following:
function data_out = kernelpca(data_in,num_dim)
%% Checking to ensure output dimensions are less than input dimensions.
if num_dim > size(data_in,1)
    fprintf('\nDimensions of output data have to be less than the dimensions of input data\n');
    fprintf('Closing program\n');
    return
end
%% Using the Gaussian kernel to construct the kernel matrix K
% K(x,y) = exp(-||x-y||^2/sigma^2)
% K is a symmetric kernel
K = zeros(size(data_in,2),size(data_in,2));
for row = 1:size(data_in,2)
    for col = 1:row
        temp = sum(((data_in(:,row) - data_in(:,col)).^2));
        K(row,col) = exp(-temp); % sigma = 1
    end
end
K = K + K';
% Dividing the diagonal elements by 2 since they have been added to themselves
for row = 1:size(data_in,2)
    K(row,row) = K(row,row)/2;
end
% We know that for PCA the data has to be centered. Even if the input data
% set 'X', let's say, is centered, there is no guarantee that the data when mapped
% into the feature space [phi(x)] is also centered. Since we never actually
% work in the feature space we cannot center the data there. To include this
% correction a pseudo-centering is done using the kernel.
one_mat = ones(size(K));
K_center = K - one_mat*K - K*one_mat + one_mat*K*one_mat;
clear K
%% Obtaining the low dimensional projection
% The following equation needs to be satisfied for K
% N*lambda*K*alpha = K*alpha
% Thus lambda has to be normalized by the number of points
opts.issym = 1;
opts.disp = 0;
opts.isreal = 1;
neigs = 30;
[eigvec, eigval] = eigs(K_center,[],neigs,'lm',opts);
eig_val = eigval ~= 0;
eig_val = eig_val./size(data_in,2);
% Again 1 = lambda*(alpha.alpha)
% Here '.' indicates the dot product
for col = 1:size(eigvec,2)
    eigvec(:,col) = eigvec(:,col)./(sqrt(eig_val(col,col)));
end
[~, index] = sort(eig_val,'descend');
eigvec = eigvec(:,index);
%% Projecting the data onto lower dimensions
data_out = zeros(num_dim,size(data_in,2));
for count = 1:num_dim
    data_out(count,:) = eigvec(:,count)'*K_center';
end
We have read lots of papers but still cannot get the hang of KPCA's logic!
Any help would be appreciated!
PCA Algorithm:
Given data samples x_1, ..., x_N:
Compute the mean: mu = (1/N) * sum_i x_i
Compute the covariance: C = (1/N) * sum_i (x_i - mu)(x_i - mu)'
Solve the eigenvalue problem: C*v = lambda*v
C : covariance matrix.
v : eigenvectors of the covariance matrix.
lambda : eigenvalues of the covariance matrix.
With the first n eigenvectors you reduce the dimensionality of your data to n dimensions. You can use this code for the PCA; it has an integrated example and it is simple.
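A minimal MATLAB sketch of these PCA steps (assuming X is an N x d matrix with one sample per row, e.g. the 2000x784 digits matrix from the question):

% X: N x d data matrix, one sample per row
mu = mean(X, 1);                         % 1 x d mean
Xc = X - repmat(mu, size(X,1), 1);       % centered data
C = (Xc' * Xc) / size(X,1);              % d x d covariance matrix
[V, D] = eig(C);                         % eigenvectors (columns of V) and eigenvalues (diagonal of D)
[~, order] = sort(diag(D), 'descend');   % sort by decreasing eigenvalue
V = V(:, order);
n = 100;                                 % target dimensionality
X_reduced = Xc * V(:, 1:n);              % N x n projection onto the first n eigenvectors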
KPCA Algorithm:
We choose a kernel function; in your code this is specified by:
K(x,y) = exp(-||x-y||^2/sigma^2)
in order to represent your data in a high-dimensional space, hoping that in this space your data will be better represented for further purposes like classification or clustering, whereas the task could be harder to solve in the initial feature space. This is also known as the "kernel trick" (see the figure).
[Step 1] Construct the Gram matrix
K = zeros(size(data_in,2),size(data_in,2));
for row = 1:size(data_in,2)
    for col = 1:row
        temp = sum(((data_in(:,row) - data_in(:,col)).^2));
        K(row,col) = exp(-temp); % sigma = 1
    end
end
K = K + K';
% Dividing the diagonal elements by 2 since they have been added to themselves
for row = 1:size(data_in,2)
    K(row,row) = K(row,row)/2;
end
Because the Gram matrix is symmetric, only half of the values are computed, and the final result is obtained by adding the Gram matrix computed so far to its transpose. Finally, the diagonal is divided by 2, as the comments mention.
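As a side note, the same Gaussian Gram matrix can be built without the double loop; a sketch (assuming data_in holds one sample per column and sigma = 1, as in the code above):

% Squared Euclidean distances between all pairs of columns of data_in
sq_norms = sum(data_in.^2, 1);                                   % 1 x N
D2 = bsxfun(@plus, sq_norms', sq_norms) - 2*(data_in'*data_in);  % N x N squared distances
K = exp(-D2);                                                    % Gaussian kernel, sigma = 1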
[Step 2] Center the kernel matrix
This is done by this part of your code:
K_center = K - one_mat*K - K*one_mat + one_mat*K*one_mat;
As the comments mention, a pseudo-centering procedure must be done; for an idea of the proof, see here.
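For reference, the form of this centering commonly given in the kernel PCA literature uses a matrix whose entries are all 1/N, where N is the number of points; a sketch (note that this is an assumption on my part, since the code above uses ones(size(K)) without that division):

N = size(K, 2);                  % number of data points
one_N = ones(N) / N;             % N x N matrix with every entry equal to 1/N
K_center = K - one_N*K - K*one_N + one_N*K*one_N;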
[Step 3] Solve the eigenvalue problem
This part of the code is responsible for this task.
%% Obtaining the low dimensional projection
% The following equation needs to be satisfied for K
% N*lambda*K*alpha = K*alpha
% Thus lambda has to be normalized by the number of points
opts.issym = 1;
opts.disp = 0;
opts.isreal = 1;
neigs = 30;
[eigvec, eigval] = eigs(K_center,[],neigs,'lm',opts);
eig_val = eigval ~= 0;
eig_val = eig_val./size(data_in,2);
% Again 1 = lambda*(alpha.alpha)
% Here '.' indicates the dot product
for col = 1:size(eigvec,2)
    eigvec(:,col) = eigvec(:,col)./(sqrt(eig_val(col,col)));
end
[~, index] = sort(eig_val,'descend');
eigvec = eigvec(:,index);
[Step 4] Change the representation of each data point
This part of the code is responsible for this task.
%% Projecting the data onto lower dimensions
data_out = zeros(num_dim,size(data_in,2));
for count = 1:num_dim
    data_out(count,:) = eigvec(:,count)'*K_center';
end
See the details here.
PS: I encourage you to use the code written by this author; it contains intuitive examples.
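Finally, a minimal usage sketch for the function above (the traindata.mat file name is an assumption; kernelpca as posted expects one sample per column, hence the transpose, and it computes at most neigs = 30 eigenvectors, so num_dim is kept below that here):

load traindata                            % assumed: a .mat file providing the 2000x784 matrix 'traindata'
data_in  = traindata';                    % 784 x 2000: one digit image per column
num_dim  = 20;                            % target dimensionality (must not exceed neigs = 30 in the posted code)
data_out = kernelpca(data_in, num_dim);   % num_dim x 2000: each column is a projected sample
% data_out' has one row per digit image: the sample expressed in the
% coordinates of the first num_dim kernel principal components.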