The following function calculates the Gaussian kernel and is part of the Kernel Ridge Regression algorithm that I wrote. I was wondering how I could modify this function to improve its execution time (i.e. get rid of the two for loops). Any ideas?
function [K] = calculate_krr_gaussiankernel(Xi,Xj,S)
K = zeros(size(Xi,1),size(Xj,1));
for Ixi = 1:size(Xi,1)
    for Ixj = 1:size(Xj,1)
        K(Ixi,Ixj) = exp((-norm(Xi(Ixi,:) - Xj(Ixj,:)) .^ 2) ./ (2 * (S .^ 2)));
    end
end
end
EDIT: The formula: K(x_i, x_j) = exp( -||x_i - x_j||^2 / (2*S^2) )
Here's a version that's most likely faster. It might however give rise to memory issues for large Xi/Xj.
function K = calculate_krr_gaussiankernel(Xi, Xj, S)
%# create an array of difference between Xi(r,:) and Xj(s,:) for all r,s
delta = bsxfun(@minus, permute(Xi,[1 3 2]), permute(Xj,[3 1 2]));
%# calculate the squared norm
ssq = sum(delta.^2, 3);
%# calculate the kernel
K = exp(-ssq./(2*S.^2));
Here's an explanation of what I'm doing:
the bsxfun line: I reshape the inputs so that, at every (i,j), the difference vector lies along the third dimension
the ssq line simply takes the sum of squares. I could take the square root here to get the norm, but since we would only square it again, there's no point in doing so.
the final line implements the formula in the OP, where ssq is the squared norm of the differences.
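If memory does become an issue for large Xi/Xj, a possible variant (a sketch, not part of the original answer) is to expand the squared distance as ||xi - xj||^2 = ||xi||^2 + ||xj||^2 - 2*xi*xj', which avoids the three-dimensional intermediate array:
sqXi = sum(Xi.^2, 2);                            %# squared norms of the rows of Xi
sqXj = sum(Xj.^2, 2);                            %# squared norms of the rows of Xj
ssq  = bsxfun(@plus, sqXi, sqXj') - 2*(Xi*Xj');  %# pairwise squared distances
K    = exp(-ssq ./ (2*S.^2));
Note that this formulation can produce tiny negative ssq values due to floating-point cancellation; max(ssq, 0) would guard against that.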
You can approximately double the speed, since K is symmetric (assuming Xi and Xj contain the same points). In addition, you can compute the norms of the difference vectors first and then make a single call to exp(), which may be faster than calling exp() over and over again. Putting this together:
function [K] = calculate_krr_gaussiankernel(Xi,Xj,S)
arg = zeros(size(Xi,1),size(Xj,1));
for Ixi = 1:size(Xi,1)
    % diagonal elements can be done in the outer loop:
    arg(Ixi,Ixi) = norm(Xi(Ixi,:) - Xj(Ixi,:));
    for Ixj = Ixi+1:size(Xj,1) % off-diagonals computed once and copied
        arg(Ixi,Ixj) = norm(Xi(Ixi,:) - Xj(Ixj,:));
        arg(Ixj,Ixi) = arg(Ixi,Ixj);
    end
end
K = exp((-arg.^2) ./ (2 * (S.^2)));
end
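As a quick sanity check (hypothetical data; the symmetric shortcut assumes Xi and Xj are the same set of points):
Xi = rand(100, 5);                              % 100 samples, 5 features
K  = calculate_krr_gaussiankernel(Xi, Xi, 0.5); % norm(K - K') should be ~0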
I'm trying to verify if my implementation of Logistic Regression in Matlab is good. I'm doing so by comparing the results I get via my implementation with the results given by the built-in function mnrfit.
The dataset D,Y that I have is such that each row of D is an observation in R^2 and the labels in Y are either 0 or 1. Thus, D is a matrix of size (n,2), and Y is a vector of size (n,1).
Here's how I do my implementation:
I first normalize my data and augment it to include the offset:
n = size(D,1); %number of observations
d = 2; %dimension of data
M = mean(D) ;
centered = D-repmat(M,n,1) ;
devs = sqrt(sum(centered.^2)) ;
normalized = centered./repmat(devs,n,1) ;
X = [normalized,ones(n,1)];
I will be doing my calculations on X.
Second, I define the gradient and hessian of the likelihood of Y|X:
function grad = gradient(w)
grad = zeros(1,d+1) ;
for i=1:n
grad = grad + (Y(i)-sigma(w'*X(i,:)'))*X(i,:) ;
end
end
function hess = hessian(w)
hess = zeros(d+1,d+1) ;
for i=1:n
hess = hess - sigma(w'*X(i,:)')*sigma(-w'*X(i,:)')*X(i,:)'*X(i,:) ;
end
end
with sigma being a Matlab function encoding the sigmoid function z-->1/(1+exp(-z)).
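For completeness, a minimal definition of such a sigma (vectorized with ./ and exp so it also accepts arrays) would be:
function s = sigma(z)
% sigmoid: maps z to 1/(1+exp(-z)), element-wise
s = 1 ./ (1 + exp(-z));
end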
Third, I run the Newton algorithm on gradient to find the roots of the gradient of the likelihood. I implemented it myself. It behaves as expected as the norm of the difference between the iterates goes to 0. I wrote it based on this script.
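For reference, a generic Newton iteration of the kind described (a sketch, not my exact script) looks roughly like this:
w = zeros(d+1,1);                           % starting point
for it = 1:100
    w_new = w - hessian(w) \ gradient(w)';  % Newton step on the gradient
    if norm(w_new - w) < 1e-10
        break
    end
    w = w_new;
end
wOPT = w_new;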
I verified that the gradient at the wOPT returned by my Newton implementation is null:
gradient(wOPT)
ans =
1.0e-15 *
0.0139 -0.0021 0.2290
and that the hessian has strictly negative eigenvalues
eig(hessian(wOPT))
ans =
-7.5459
-0.0027
-0.0194
Here's the wOPT I get with my implementation:
wOPT =
-110.8873
28.9114
1.3706
the offset being the last element. In order to plot the decision line, I should convert the slope wOPT(1:2) using M and devs. So I set:
my_offset = wOPT(end);
my_slope = wOPT(1:d)'.*devs + M ;
and I get:
my_slope =
1.0e+03 *
-7.2109 0.8166
my_offset =
1.3706
Now, when I run B=mnrfit(D,Y+1), I get
B =
-1.3496
1.7052
-1.0238
The offset is stored in B(1).
I get very different values. I would like to know what I am doing wrong. I have some doubt about the normalization and 'un-normalization' process. But I'm not sure, may be I'm doing something else wrong.
Additional Info
When I type:
B=mnrfit(normalized,Y+1)
I get
-1.3706
110.8873
-28.9114
which is a rearranged version of the negative of my wOPT; it contains exactly the same elements, up to sign.
It seems likely that my scaling back of the learnt parameters is wrong. Otherwise, it would have given the same result as B=mnrfit(D,Y+1).
I have a probability matrix (glcm) of size 256x256x20. I have reshaped the matrix to 65536x20, so that I can eliminate one loop (along the 3rd dimension).
I want to do the following calculation.
for y = 1:256
    for x = 1:256
        if (ismember((x + y),(2:2*256)))
            p_xplusy((x+y),:) = p_xplusy((x+y),:) + glcm(((y-1)*256+x),:);
        end
    end
end
So p_xplusy will be a 511x20 matrix in which each element is the sum along a diagonal of an n-by-n submatrix (where n belongs to 1:256) of the original 256x256x20 matrix.
This code block is making my program inefficient and I want to vectorize this loop. Any help would be appreciated.
Since your if statement is just checking whether x+y is less than or equal to 256, just force it to always be, and remove excess loops:
for y = 1:256
    for x = 1:256-y
        p_xplusy((x+y),:) = p_xplusy((x+y),:) + glcm(((y-1)*256+x),:);
    end
end
This should provide a noticeable speed-up to your code.
You can reduce the number of interpreted loop iterations from n^2 to 2*n and thus improve runtime efficiency -
N = 256;
for k1 = 1:N
    idx_glcm = k1:N-1:N*(k1-1)+1;
    p_xplusy(k1+1,:) = p_xplusy(k1+1,:) + sum(glcm(idx_glcm,:),1);
end
for k1 = 2:N
    idx_glcm = k1*N:N-1:N*(N-1)+k1;
    p_xplusy(N+k1,:) = p_xplusy(N+k1,:) + sum(glcm(idx_glcm,:),1);
end
Some quick runtime tests seem to confirm our efficiency theory too.
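For reference, here is a sketch using accumarray (not part of the answers above) that removes the loops over x and y entirely, leaving only a short loop over the 20 slices. It assumes glcm has already been reshaped to 65536x20 and that p_xplusy is indexed by x+y, as in the original loop:
N = 256;
[X, Y] = ndgrid(1:N, 1:N);           % X(:), Y(:) enumerate pixels in the same order as (y-1)*N + x
subs = X(:) + Y(:);                  % diagonal index x+y, ranging from 2 to 2*N
p_xplusy = zeros(2*N, size(glcm,2)); % row 1 stays zero, matching the p_xplusy(x+y,:) indexing
for k = 1:size(glcm,2)
    p_xplusy(:,k) = accumarray(subs, glcm(:,k), [2*N, 1]);
end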
I need help finding an integral of a function using trapezoidal sums.
The program should take successive trapezoidal sums with n = 1, 2, 3, ...
subintervals until the sums for two neighbouring values of n differ by less than a given tolerance. I want at least one FOR loop within a WHILE loop and I don't want to use the trapz function. The program takes four inputs:
f: A function handle for a function of x.
a: A real number.
b: A real number larger than a.
tolerance: A real number that is positive and very small.
The problem I have is trying to implement the formula for trapezoidal sums which is
(Δx/2) * [y_0 + 2*y_1 + 2*y_2 + … + 2*y_(n-1) + y_n]
Here is my code, and the part I'm stuck on is the "sum" term within the FOR loop. I'm trying to sum up 2*y_2 + 2*y_3 + ... + 2*y_(n-1), since I already accounted for 2*y_1. I get an answer, but it isn't as accurate as it should be. For example, I get 6.071717974723753 instead of 6.101605982576467.
Thanks for any help!
function t=trapintegral(f,a,b,tol)
format compact; format long;
syms x;
oldtrap = ((b-a)/2)*(f(a)+f(b));
n = 2;
h = (b-a)/n;
newtrap = (h/2)*(f(a)+(2*f(a+h))+f(b));
while (abs(newtrap-oldtrap)>=tol)
oldtrap = newtrap;
for i=[3:n]
dx = (b-a)/n;
trapezoidsum = (dx/2)*(f(x) + (2*sum(f(a+(3:n-1))))+f(b));
newtrap = trapezoidsum;
end
end
t = newtrap;
end
The reason why this code isn't working is because there are two slight errors in your summation for the trapezoidal rule. What I am precisely referring to is this statement:
trapezoidsum = (dx/2)*(f(x) + (2*sum(f(a+(3:n-1))))+f(b));
Recall the equation for the trapezoidal integration rule:
∫[a,b] f(x) dx ≈ (Δx/2) * [ f(x_1) + 2*f(x_2) + 2*f(x_3) + ... + 2*f(x_N) + f(x_(N+1)) ]   (source: Wikipedia)
For the first error, f(x) should be f(a) as you are including the starting point, and it shouldn't be left symbolic. In fact, you should simply get rid of the syms x statement as it is not useful in your script. a corresponds to x_1 in the equation above.
The next error is the second term. You actually need to multiply your index values (3:n-1) by dx. Also, this should actually go from (1:n-1) and I'll explain later. The equation above goes from 2 to N, but for our purposes, we are going to go from 1 to N-1 as you have your code set up like that.
Remember, in the trapezoidal rule, you are subdividing the finite interval into n pieces. The ith piece is defined as:
x_i = a + dx*i,
where i goes from 1 up to N-1. Note that this starts at 1 and not 3. The reason is that the first point is already taken into account by f(a), and we only count up to N-1 because the last point is accounted for by f(b). In the equation above the doubled terms go from 2 to N, and by modifying the code this way, this is precisely what we end up computing.
Therefore, your statement actually needs to be:
trapezoidsum = (dx/2)*(f(a) + (2*sum(f(a+dx*(1:n-1))))+f(b));
Try this and let me know if you get the right answer. FWIW, MATLAB already implements trapezoidal integration via trapz, as @ADonda already pointed out. However, you need to properly structure your x and y values before you use it. In other words, you would need to set up your dx beforehand, then calculate your x points using the x_i equation that I specified above, then use these to generate your y values. You then use trapz to calculate the area. In other words:
dx = (b-a) / n;
x = a + dx*(0:n);
y = f(x);
trapezoidsum = trapz(x,y);
You can use the above code as a reference to see if you are implementing the trapezoidal rule correctly. Your implementation and using the above code should generate the same results. All you have to do is change the value of n, then run this code to generate the approximation of the area for different subdivisions underneath your curve.
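For example, a reference check with n = 100 subdivisions and the test function used later in this answer (the values here are purely illustrative) might look like:
f  = @(x) (x + x.^2).^(1/3);  % test function used later in this answer
a  = 1; b = 4; n = 100;       % illustrative interval and number of subdivisions
dx = (b-a) / n;
x  = a + dx*(0:n);
y  = f(x);
trapz(x, y)                   % reference area for this choice of n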
Edit - August 17th, 2014
I figured out why your code isn't working. Here are the reasons why:
The for loop is unnecessary. Take a look at the for loop iteration. You have a loop going from i = [3:n] yet you don't reference the i variable at all in your loop. As such, you don't need this at all.
You are not computing successive intervals properly. What you need to do is when you compute the trapezoidal sum for the nth subinterval, you then increment this value of n, then compute the trapezoidal rule again. This value is not being incremented properly in your while loop, which is why your area is never improving.
You need to save the previous area inside the while loop, then when you compute the next area, that's when you determine whether or not the difference between the areas is less than the tolerance. We can also get rid of the code at the beginning that tries to compute the area for n = 2. That's not needed, as we can place this inside your while loop. As such, this is what your code should look like:
function t=trapintegral(f,a,b,tol)
format long; %// Got rid of format compact. Useless
%// n starts at 2 - Also removed syms x - Useless statement
n = 2;
newtrap = ((b-a)/2)*(f(a) + f(b)); %// Initialize
oldtrap = 0; %// Initialize to 0
while (abs(newtrap-oldtrap)>=tol)
oldtrap = newtrap; %//Save the old area from the previous iteration
dx = (b-a)/n; %//Compute width
%//Determine sum
trapezoidsum = (dx/2)*(f(a) + (2*sum(f(a+dx*(1:n-1))))+f(b));
newtrap = trapezoidsum; % //This is the new sum
n = n + 1; % //Go to the next value of n
end
t = newtrap;
end
By running your code, this is what I get:
trapezoidsum = trapintegral(@(x) (x+x.^2).^(1/3),1,4,0.00001)
trapezoidsum =
6.111776299189033
Caveat
Look at the way I defined your function. You must use element-by-element operations as the sum command inside the loop will be vectorized. Take a look at the ^ operations specifically. You need to prepend a dot to the operations. Once you do this, I get the right answer.
Edit #2 - August 18th, 2014
You said you want at least one for loop. This is highly inefficient, and whoever specified having one for loop in the code really doesn't know how MATLAB works. Nevertheless, you can use the for loop to accumulate the sum term. As such:
function t=trapintegral(f,a,b,tol)
format long; %// Got rid of format compact. Useless
%// n starts at 3 - Also removed syms x - Useless statement
n = 3;
%// Compute for n = 2 first, then proceed if we don't get a better
%// difference tolerance
newtrap = ((b-a)/2)*(f(a) + f(b)); %// Initialize
oldtrap = 0; %// Initialize to 0
while (abs(newtrap-oldtrap)>=tol)
oldtrap = newtrap; %//Save the old area from the previous iteration
dx = (b-a)/n; %//Compute width
%//Determine sum
%// Initialize
trapezoidsum = (dx/2)*(f(a) + f(b));
%// Accumulate sum terms
%// Note that we multiply each term by (dx/2), but because of the
%// factor of 2 for each of these terms, these cancel and we thus have dx
for n2 = 1 : n-1
trapezoidsum = trapezoidsum + dx*f(a + dx*n2);
end
newtrap = trapezoidsum; % //This is the new sum
n = n + 1; % //Go to the next value of n
end
t = newtrap;
end
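As with the earlier version, you can call it the same way; the result should agree with the vectorized version to within the tolerance:
t = trapintegral(@(x) (x + x.^2).^(1/3), 1, 4, 0.00001);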
Good luck!
I'm trying to write a Non-Local Means filter for an assignment. I've written the code in two ways, but the method I'd expect to be quicker is much slower than the other method.
Method 1: (This method is slower)
for i = 1:size(I,1)
    tic
    sprintf('%d/%d',i,size(I,1))
    for j = 1:size(I,2)
        w = exp((-abs(I-I(i,j))^2)/(h^2));
        Z = sum(sum(w));
        w = w/Z;
        sumV = w .* I;
        NL(i,j) = sum(sum(sumV));
    end
    toc
end
Method 2: (This method is faster)
for i = 1:size(I,1)
    tic
    sprintf('%d/%d',i,size(I,1))
    for j = 1:size(I,2)
        Z = 0;
        for k = 1:size(I,1)
            for l = 1:size(I,2)
                w = exp((-abs(I(i,j)-I(k,l))^2)/(h^2));
                Z = Z + w;
            end
        end
        sumV = 0;
        for k = 1:size(I,1)
            for l = 1:size(I,2)
                w = exp((-abs(I(i,j)-I(k,l))^2)/(h^2));
                w = w/Z;
                sumV = sumV + w * I(k,l);
            end
        end
        NL(i,j) = sumV;
    end
    toc
end
I really thought that MATLAB would be optimized for matrix operations. Is there a reason that isn't the case in this code? The difference is pretty large. For a 512x512 image, with h = 0.05, one iteration of the outer loop takes 24-28 seconds for Method 1 and 10-12 seconds for Method 2.
The two methods are not doing the same thing. In Method 2, the term abs(I(i,j)-I(k,l)) in the w= expression is being squared, which is fine because the term is just a single numeric value.
However, in Method 1, the term abs(I-I(i,j)) is actually a matrix (the numeric value I(i,j) is subtracted from every element in the matrix I, returning a matrix again). So, when this term is squared with the ^ operator, matrix multiplication is happening. My guess, based on Method 2, is that this is not what you intended. If instead you want to square each element of that matrix, use the .^ operator, as in abs(I-I(i,j)).^2.
Matrix multiplication is a much more computation intensive operation, which is likely why Method 1 takes so much longer.
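To illustrate, here is a sketch of Method 1 with the element-wise square (and NL preallocated; variable names follow your code, and I and h are assumed to be defined as before):
NL = zeros(size(I));                        % preallocate the output
for i = 1:size(I,1)
    for j = 1:size(I,2)
        w = exp(-abs(I - I(i,j)).^2 / h^2); % .^ squares each element, no matrix multiply
        w = w / sum(w(:));                  % normalize the weights
        NL(i,j) = sum(sum(w .* I));         % weighted average over all pixels
    end
end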
My guess is that you have not preallocated NL and that both methods are in the same function (or are scripts and you didn't clear NL between runs). This would have slowed the first method down quite a bit.
Try the following: Create a function for both methods. Run each method once. Then use the profiler to see where each function spends most of its time.
A much faster (vectorized) implementation could be achieved using im2col:
Create a vector out of each neighborhood.
Using predefined indices, calculate the distance between each pair of patches.
Sum over the values and the weights using the sum function.
This method works with no loops at all.
The problem says:
Three tensile tests were carried out on an aluminum bar. In each test the strain was measured at the same values of stress. The results were:
Stress (MPa):     34.5    69      103.5   138
Strain, test 1:   0.46    0.95    1.48    1.93
Strain, test 2:   0.34    1.02    1.51    2.09
Strain, test 3:   0.73    1.10    1.62    2.12
where the units of strain are mm/m. Use linear regression to estimate the modulus of elasticity of the bar (modulus of elasticity = stress/strain).
I used this program for this problem:
function coeff = polynFit(xData,yData,m)
% Returns the coefficients of the polynomial
% a(1)*x^(m-1) + a(2)*x^(m-2) + ... + a(m)
% that fits the data points in the least squares sense.
% USAGE: coeff = polynFit(xData,yData,m)
% xData = x-coordinates of data points.
% yData = y-coordinates of data points.
A = zeros(m); b = zeros(m,1); s = zeros(2*m-1,1);
for i = 1:length(xData)
    temp = yData(i);
    for j = 1:m
        b(j) = b(j) + temp;
        temp = temp*xData(i);
    end
    temp = 1;
    for j = 1:2*m-1
        s(j) = s(j) + temp;
        temp = temp*xData(i);
    end
end
for i = 1:m
    for j = 1:m
        A(i,j) = s(i+j-1);
    end
end
% Rearrange coefficients so that coefficient
% of x^(m-1) is first
coeff = flipdim(gaussPiv(A,b),1);
The problem is solved without a program as follows
MY ATTEMPT
T=[34.5,69,103.5,138];
D1=[.46,.95,1.48,1.93];
D2=[.34,1.02,1.51,2.09];
D3=[.73,1.1,1.62,2.12];
Mod1=T./D1;
Mod2=T./D2;
Mod3=T./D3;
xData=T;
yData1=Mod1;
yData2=Mod2;
yData3=Mod3;
coeff1 = polynFit(xData,yData1,2);
coeff2 = polynFit(xData,yData2,2);
coeff3 = polynFit(xData,yData3,2);
x1=(0:.5:190);
y1=coeff1(2)+coeff1(1)*x1;
subplot(1,3,1);
plot(x1,y1,xData,yData1,'o');
y2=coeff2(2)+coeff2(1)*x1;
subplot(1,3,2);
plot(x1,y2,xData,yData2,'o');
y3=coeff3(2)+coeff3(1)*x1;
subplot(1,3,3);
plot(x1,y3,xData,yData3,'o');
What do I have to do to get this result?
As a general advice:
avoid for loops wherever possible.
avoid using i and j as variable names, as they are Matlab built-in names for the imaginary unit (I really hope that disappears in a future release...)
Due to Matlab being an interpreted language, for-loops can be very slow compared to their compiled alternatives. Matlab is named MATrix LABoratory, meaning it is highly optimized for matrix/array operations. Usually, when there is an operation that cannot be done without a loop, Matlab has a built-in function for it that runs way, way faster than a for-loop in Matlab ever will. For example: computing the mean of elements in an array: mean(x). The sum of all elements in an array: sum(x). The standard deviation of elements in an array: std(x). And so on. Matlab's power comes from these built-in functions.
So, your problem. You have a linear regression problem. The easiest way in Matlab to solve this problem is this:
%# your data
stress = [ %# in Pa
34.5 69 103.5 138] * 1e6;
strain = [ %# in m/m
0.46 0.95 1.48 1.93
0.34 1.02 1.51 2.09
0.73 1.10 1.62 2.12]' * 1e-3;
%# make linear array for the data
yy = strain(:);
xx = repmat(stress(:), size(strain,2),1);
%# re-formulate the problem into linear system Ax = b
A = [xx ones(size(xx))];
b = yy;
%# solve the linear system
x = A\b;
%# modulus of elasticity is coefficient
%# (NOTE: y-offset is relatively small and can be ignored)
E = 1/x(1)
What you did in the function polynFit is done by A\b, but the \-operator can do it much faster, more robustly and more flexibly than what you tried to do yourself. I'm not saying you shouldn't try to write these things yourself (please keep doing that, you learn a lot from it!), I'm saying that for the "real" results, always use the \-operator (and check your own results against it as well).
The backslash operator (type help \ on the command prompt) is extremely useful in many situations, and I advise you learn it and learn it well.
I leave you with this: here's how I would write your polynFit function:
function coeff = polynFit(X,Y,m)
if numel(X) ~= numel(Y)
    error('polynFit:size_mismatch',...
        'number of elements in matrices X and Y must be equal.');
end
%# bad condition number, rank errors, etc. taken care of by \
coeff = bsxfun(@power, X(:), m:-1:0) \ Y(:);
end
I leave it up to you to figure out how this works.
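For completeness, here is a hypothetical usage of this rewritten polynFit on the first test's data from the question (note that in this version m is the polynomial degree, so m = 1 fits a straight line; stress in MPa and strain in mm/m as given):
stress = [34.5 69 103.5 138];         % MPa, from the question
strain = [0.46 0.95 1.48 1.93];       % mm/m, first test
coeff  = polynFit(stress, strain, 1); % fits strain = coeff(1)*stress + coeff(2)
E      = 1/coeff(1);                  % modulus of elasticity in MPa/(mm/m), i.e. GPa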