I am new to MATLAB and don't know the proper basics of vectorization. I am trying to vectorize this function.
function indc = PatchSearch(X, row, col, off, nv, S, I)
[N, M] = size(I);
f2 = size(X, 2);
% Clip the search window to the image borders
rmin = max(row - S, 1);
rmax = min(row + S, N);
cmin = max(col - S, 1);
cmax = min(col + S, M);
idx = I(rmin:rmax, cmin:cmax);
idx = idx(:);
B = X(idx, :);
v = X(off, :);
% Per-feature mean squared distance from each candidate patch to v
dis = (B(:,1) - v(1)).^2;
for k = 2:f2
    dis = dis + (B(:,k) - v(k)).^2;
end
dis = dis ./ f2;
% Keep the nv closest patches
[~, ind] = sort(dis);
indc = idx(ind(1:nv));
%indc = idx(dis < 250);
I need help from some experts with vectorizing this function.
Thanks
You can replace the following loopy portion of your code -
dis = (B(:,1) - v(1)).^2;
for k = 2:f2
dis = dis + (B(:,k) - v(k)).^2;
end
With this bsxfun implementation -
dis = sum(bsxfun(@minus,B,v).^2,2);
The assumption here is that f2 is the number of columns in B, which is the same as the number of elements in v; given the way B and v are initialized in your code, that seems right.
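As a side note, on MATLAB R2016b or newer, implicit expansion lets you drop bsxfun entirely and get the same result (a sketch, assuming B and v as above):
% R2016b+: the 1-by-f2 row vector v expands across the rows of B
dis = sum((B - v).^2, 2);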
One possible solution using repmat and sum:
Replace the loop and the line above it with this code.
dis=sum((B-repmat(v,size(B,1),1)).^2,2);
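Alternatively, if you have the Statistics and Machine Learning Toolbox, pdist2 can compute the squared distances in one call (a sketch; the 'squaredeuclidean' option requires a reasonably recent release):
% squared Euclidean distance from each row of B to v, then per-feature mean
dis = pdist2(B, v, 'squaredeuclidean') / f2;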
This is my approximate entropy calculator in MATLAB (https://en.wikipedia.org/wiki/Approximate_entropy).
I'm not sure why it isn't working; it's returning a negative value. Can anyone help me with this? R1 is the data.
FindSize = size(R1);
N = FindSize(1);
% N = input('insert number of data values');
% If you want to supply your own N, remove the % from the line above
% and put a % before the N = FindSize(1) line.
% m = input('insert m: integer representing length of data, embedding dimension');
m = 2;
% r = input('insert r: positive real number for filtering, threshold');
r = 0.2*std(R1);
for x1 = R1(1:N-m+1,1)
    D1 = pdist2(x1,x1);
    C11 = (D1 <= r)/(N-m+1);
    c1 = C11(1);
end
for i1 = 1:N-m+1
    s1 = sum(log(c1));
end
phi1 = (s1/(N-m+1));
for x2 = R1(1:N-m+2,1)
    D2 = pdist2(x2,x2);
    C21 = (D2 <= r)/(N-m+2);
    c2 = C21(1);
end
for i2 = 1:N-m+2
    s2 = sum(log(c2));
end
phi2 = (s2/(N-m+2));
Ap = phi1 - phi2;
Apen = Ap(1)
Following the documentation provided by the Wikipedia article, I developed this small function that calculates the approximate entropy:
function res = approximate_entropy(U,m,r)
N = numel(U);
res = zeros(1,2);
for i = [1 2]
    off = m + i - 1;
    off_N = N - off;
    off_N1 = off_N + 1;
    % Build the matrix of subsequences of length m (i = 1) and m+1 (i = 2)
    x = zeros(off_N1,off);
    for j = 1:off
        x(:,j) = U(j:off_N+j);
    end
    % For each subsequence, the fraction of subsequences within max-norm r
    C = zeros(off_N1,1);
    for j = 1:off_N1
        dist = abs(x - repmat(x(j,:),off_N1,1));
        C(j) = sum(~any((dist > r),2)) / off_N1;
    end
    res(i) = sum(log(C)) / off_N1;
end
res = res(1) - res(2);
end
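As a side note, if you have the Statistics and Machine Learning Toolbox, the inner distance loop can be replaced by a single pdist2 call with the Chebyshev (maximum-norm) metric, which is exactly the distance computed above:
D = pdist2(x, x, 'chebychev');   % pairwise maximum-norm distances
C = sum(D <= r, 2) / off_N1;     % fraction of subsequences within r of each one
res(i) = sum(log(C)) / off_N1;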
I first tried to replicate the computation shown in the article, and the result I obtain matches the result shown in the example:
U = repmat([85 80 89],1,17);
approximate_entropy(U,2,3)
ans =
-1.09965411068114e-05
Then I created another example that shows a case in which approximate entropy produces a meaningful result (the entropy of the first sample is always less than the entropy of the second one):
% starting variables...
s1 = repmat([10 20],1,10);
s1_m = mean(s1);
s1_s = std(s1);
s2_m = 0;
s2_s = 0;
% datasample will not always return a perfect M and S match
% so let's repeat this until equality is achieved...
while ((s1_m ~= s2_m) || (s1_s ~= s2_s)) % repeat until both mean and std match
    s2 = datasample([10 20],20,'Replace',true,'Weights',[0.5 0.5]);
    s2_m = mean(s2);
    s2_s = std(s2);
end
m = 2;
r = 3;
ae1 = approximate_entropy(s1,m,r)
ae2 = approximate_entropy(s2,m,r)
ae1 =
0.00138568170752751
ae2 =
0.680090884817465
Finally, I tried with your sample data:
fid = fopen('O1.txt','r');
U = cell2mat(textscan(fid,'%f'));
fclose(fid);
m = 2;
r = 0.2 * std(U);
approximate_entropy(U,m,r)
ans =
1.08567461184858
I am trying to implement my own LU decomposition with partial pivoting. My code is below; it apparently works fine, but for some matrices it gives different results than the built-in [L, U, P] = lu(A) function in MATLAB.
Can anyone spot where it is wrong?
function [L, U, P] = lu_decomposition_pivot(A)
n = size(A,1);
Ak = A;
L = zeros(n);
U = zeros(n);
P = eye(n);
for k = 1:n-1
    for i = k+1:n
        [~,r] = max(abs(Ak(:,k)));
        Ak([k r],:) = Ak([r k],:);
        P([k r],:) = P([r k],:);
        L(i,k) = Ak(i,k) / Ak(k,k);
        for j = k+1:n
            U(k,j-1) = Ak(k,j-1);
            Ak(i,j) = Ak(i,j) - L(i,k)*Ak(k,j);
        end
    end
end
L(1:n+1:end) = 1;
U(:,end) = Ak(:,end);
return
Here are the two matrices I've tested with. The first one is correct, whereas the second has some elements inverted.
A = [1 2 0; 2 4 8; 3 -1 2];
A = [0.8443 0.1707 0.3111;
0.1948 0.2277 0.9234;
0.2259 0.4357 0.4302];
UPDATE
I have checked my code and corrected some bugs, but there is still something wrong with the partial pivoting. In the first column, the last two rows are always swapped (compared with the result of lu() in MATLAB).
function [L, U, P] = lu_decomposition_pivot(A)
n = size(A,1);
Ak = A;
L = eye(n);
U = zeros(n);
P = eye(n);
for k = 1:n-1
    [~,r] = max(abs(Ak(k:end,k)));
    r = n-(n-k+1)+r;
    Ak([k r],:) = Ak([r k],:);
    P([k r],:) = P([r k],:);
    for i = k+1:n
        L(i,k) = Ak(i,k) / Ak(k,k);
        for j = 1:n
            U(k,j) = Ak(k,j);
            Ak(i,j) = Ak(i,j) - L(i,k)*Ak(k,j);
        end
    end
end
U(:,end) = Ak(:,end);
return
I forgot that if there is a swap in matrix P, I also have to swap the corresponding rows of matrix L. Just add the following line after swapping P and everything will work:
L([k r],:) = L([r k],:);
Neither function is correct.
Here is a correct one.
function [L, U, P] = LU_pivot(A)
[m, n] = size(A);
L = eye(n); P = eye(n); U = A;
for k = 1:m-1
    % find the index of the pivot row in column k
    pivot = max(abs(U(k:m,k)));
    for j = k:m
        if abs(U(j,k)) == pivot
            ind = j;
            break;
        end
    end
    % swap rows k and ind of U, of the computed part of L, and of P
    U([k,ind],k:m) = U([ind,k],k:m);
    L([k,ind],1:k-1) = L([ind,k],1:k-1);
    P([k,ind],:) = P([ind,k],:);
    % eliminate entries below the pivot
    for j = k+1:m
        L(j,k) = U(j,k)/U(k,k);
        U(j,k:m) = U(j,k:m) - L(j,k)*U(k,k:m);
    end
end
end
My answer is here:
function [L, U, P] = LU_pivot(A)
n = size(A,1);
L = eye(n); P = eye(n); U = A;
for j = 1:n
    [pivot, m] = max(abs(U(j:n, j)));
    m = m + j - 1;
    if m ~= j
        U([m,j],:) = U([j,m],:); % interchange rows m and j in U
        P([m,j],:) = P([j,m],:); % interchange rows m and j in P
        if j >= 2                % very important point
            L([m,j],1:j-1) = L([j,m],1:j-1); % interchange rows m and j in columns 1:j-1 of L
        end
    end
    for i = j+1:n
        L(i,j) = U(i,j) / U(j,j);
        U(i,:) = U(i,:) - L(i,j)*U(j,:);
    end
end
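Whichever version you use, a more robust test than comparing element by element against lu is to check the defining property P*A = L*U directly, for example:
A = [0.8443 0.1707 0.3111; 0.1948 0.2277 0.9234; 0.2259 0.4357 0.4302];
[L, U, P] = LU_pivot(A);
norm(P*A - L*U)   % should be on the order of machine epsilon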
I am trying to recreate MATLAB's hough function with my own implementation. My code follows:
function [H,T,R] = my_hough(x,dr,dtheta)
rows = size(x,1);
cols = size(x,2);
D = sqrt((rows - 1)^2 + (cols - 1)^2);   % image diagonal
Nr = 2*(ceil(D/dr)) + 1;
diagonal = dr*ceil(D/dr);
R = -diagonal:dr:diagonal;
T = -90:dtheta:90-dtheta;
Ntheta = length(T);
H = zeros(Nr,Ntheta);
for i = 1:Ntheta
    for n1 = 1:rows
        for n2 = 1:cols
            if x(n1,n2) == 1
                % vote for the (rho, theta) bin closest to this point
                r = n2*cos(T(i)*pi/180) + n1*sin(T(i)*pi/180);
                [~,j] = min(abs(R - ones(1,Nr)*r));
                H(j,i) = H(j,i) + 1;
            end
        end
    end
end
end
where dr and dtheta are the distance and angle resolutions. When I print the difference between my Hough table and MATLAB's, most entries are zero, but some are non-zero. Any idea why this is happening?
Well, actually it was a very silly mistake...
r = n2*cos(T(i)*pi/180) + n1*sin(T(i)*pi/180);
must be
r = (n2-1)*cos(T(i)*pi/180) + (n1-1)*sin(T(i)*pi/180);
This is because of MATLAB's one-based indexing: pixel (n1,n2) corresponds to the point (n2-1, n1-1).
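If you have the Image Processing Toolbox, you can sanity-check the corrected function against the built-in hough; a sketch with a hypothetical test image BW:
BW = zeros(50);
BW(20, 10:40) = 1;   % a simple horizontal line as test input
[H1, T1, R1] = my_hough(BW, 1, 1);
[H2, T2, R2] = hough(BW, 'RhoResolution', 1, 'Theta', -90:89);
max(abs(H1(:) - double(H2(:))))   % expect 0, up to possible ties at bin edges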
I have two large matrices, similar to the following:
m, with size 1000 by 10
n, with size 1 by 10
I would like to subtract each element of n from all elements of m, obtaining ten different matrices, each of size 1000 by 10.
I started as follows:
clc; clear;
nrow = 10000;
ncol = 10;
t = length(n)
for i = 1:nrow
    for j = 1:ncol
        for t = 1:length(n)
            m1(i,j) = m(i,j)-n(1);
            m2(i,j) = m(i,j)-n(2);
            m3(i,j) = m(i,j)-n(3);
            m4(i,j) = m(i,j)-n(4);
            m5(i,j) = m(i,j)-n(5);
            m6(i,j) = m(i,j)-n(6);
            m7(i,j) = m(i,j)-n(7);
            m8(i,j) = m(i,j)-n(8);
            m9(i,j) = m(i,j)-n(9);
            m10(i,j) = m(i,j)-n(10);
        end
    end
end
Can anyone help me do this without writing the ten equations inside the loop? Or can you suggest a convenient way, especially when the two matrices have many columns?
Why can't you just do this:
m01 = m - n(1);
...
m10 = m - n(10);
What do you need the loop for?
Even better:
N = length(n);
m2 = cell(N, 1);
for k = 1:N
    m2{k} = m - n(k);
end
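Each difference matrix is then available by indexing the cell array, for example:
isequal(m2{3}, m - n(3))   % returns true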
Here we go loopless:
nrow = 10000;
ncol = 10;
%example data
m = ones(nrow,ncol);
n = 1:ncol;
M = repmat(m,1,1,ncol);
N = permute( repmat(n,nrow,1,ncol) , [1 3 2] );
result = bsxfun(@minus, M, N);
%or just
result = M-N;
Elapsed time is 0.018499 seconds.
or as recommended by Luis Mendo:
M = repmat(m,1,1,ncol);
result = bsxfun(@minus, m, permute(n, [1 3 2]));
Elapsed time is 0.000094 seconds.
Please make sure that your input vectors have the same orientation as in my example; otherwise you could get into trouble. You should be able to achieve that by transposing, or you can modify this line:
permute( repmat(n,nrow,1,ncol) , [1 3 2] )
according to your needs.
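On R2016b or newer you can also skip both repmat and bsxfun, since implicit expansion handles the singleton dimensions automatically:
result = m - permute(n, [1 3 2]);   % nrow-by-ncol-by-ncol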
You mentioned in a comment that you want to count the negative elements in each of the obtained columns:
A = result;     % back up the results
A(A > 0) = 0;   % set positive elements to zero
D = sum( logical(A), 3 );
which will return the desired 10000x10 matrix with the counts of negative elements. (Please verify it; I may have gotten a little confused with the dimensions ;))
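The zeroing step is not strictly needed; the same count can be obtained directly from the result:
D = sum(result < 0, 3);   % 10000-by-10 counts of negative elements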
Create a three-dimensional result matrix and store your results, for example, along the third dimension.
clc; clear;
nrow = 10000;
ncol = 10;
N = length(n);
resultMatrix = zeros(nrow, ncol, N);
neg = zeros(ncol, N); % number of negative values
for j = 1:ncol
    for i = 1:nrow
        for t = 1:N
            resultMatrix(i,j,t) = m(i,j) - n(t);
        end
    end
    for t = 1:N
        neg(j,t) = length( find(resultMatrix(:,j,t) < 0) );
    end
end
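For reference, the same result matrix and negative counts can be produced without loops (assuming m, n and N as above, and R2016b or newer):
resultMatrix = m - permute(n, [1 3 2]);    % nrow-by-ncol-by-N
neg = squeeze(sum(resultMatrix < 0, 1));   % ncol-by-N counts of negatives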
I asked a question a few days ago, but I guess it was a little too complicated and I don't expect to get any answer.
My problem is that I need to use an ANN for classification. I've read that a much better cost function (or loss function, as some books call it) is the cross-entropy, that is J(w) = -1/m * sum_i( y_i*ln(h_w(x_i)) + (1-y_i)*ln(1 - h_w(x_i)) ), where i indexes the training examples in the matrix X (a one-line vectorized version is sketched after the list below). I tried to apply it in MATLAB, but I find it really difficult. There are a couple of things I don't know:
should I sum the outputs over all training data (i = 1, ..., N, where N is the number of training examples)?
is the gradient calculated correctly?
is the numerical gradient (gradApprox) calculated correctly?
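For reference, with h denoting the m-by-1 vector of network outputs for the m training examples (h is an illustrative name, not taken from the code below), the cross-entropy above is a single vectorized line:
J = -mean(y .* log(h) + (1 - y) .* log(1 - h));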
I have the following MATLAB code. I realise I may be asking about something trivial, but I hope someone can give me some clues on how to find the problem. I suspect the problem lies in calculating the gradients.
Many thanks.
Main script:
close all
clear all
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
% theta = [10 -30 -30];
x = [0 0; 0 1; 1 0; 1 1];
y = [0.9 0.1 0.1 0.1]';
theta0 = 2*rand(9,1)-1;
options = optimset('gradObj','on','Display','iter');
thetaVec = fminunc(@costFunction,theta0,options,x,y);
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
NN(x,theta)'
Cost function:
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
persistent index;
% 1 x x
% 1 x x
% 1 x x
% x = 1 x x
% 1 x x
% 1 x x
% 1 x x
m = size(x,1);
if isempty(index) || index > size(x,1)
    index = 1;
end
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')];
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
Dew = cell(2,1);
DewApprox = cell(2,1);
% Forward propagation
a0 = x(index,:)';
z1 = theta{1}*[1;a0];
a1 = L(z1);
z2 = theta{2}*[1;a1];
a2 = L(z2);
% Back propagation
d2 = 1/m*(a2 - y(index))*L(z2)*(1-L(z2));
Dew{2} = [1;a1]*d2;
d1 = [1;a1].*(1 - [1;a1]).*theta{2}'*d2;
Dew{1} = [1;a0]*d1(2:end)';
% NNRes = NN(x,theta)';
% jVal = -1/m*sum(NNRes-y)*NNRes*(1-NNRes);
jVal = -1/m*(a2 - y(index))*a2*(1-a2);
gradVal = [Dew{1}(:);Dew{2}(:)];
gradApprox = CalcGradApprox(0.0001);
index = index + 1;
function output = CalcGradApprox(epsilon)
output = zeros(size(gradVal));
for n = 1:length(thetaVec)
    thetaVecMin = thetaVec;
    thetaVecMax = thetaVec;
    thetaVecMin(n) = thetaVec(n) - epsilon;
    thetaVecMax(n) = thetaVec(n) + epsilon;
    thetaMin = cell(2,1);
    thetaMax = cell(2,1);
    thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
    thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
    thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
    thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
    a2min = NN(x(index,:),thetaMin)';
    a2max = NN(x(index,:),thetaMax)';
    jValMin = -1/m*(a2min-y(index))*a2min*(1-a2min);
    jValMax = -1/m*(a2max-y(index))*a2max*(1-a2max);
    output(n) = (jValMax - jValMin)/2/epsilon;
end
end
end
EDIT:
Below I present the correct version of my costFunction for those who may be interested.
function [jVal,gradVal,gradApprox] = costFunction(thetaVec,x,y)
m = size(x,1);
L = @(x) (1 + exp(-x)).^(-1);
NN = @(x,theta) L(theta{2}*[ones(1,size(x,1));L(theta{1}*[ones(size(x,1),1) x]')]);
theta = cell(2,1);
theta{1} = reshape(thetaVec(1:6),[2 3]);
theta{2} = reshape(thetaVec(7:9),[1 3]);
Delta = cell(2,1);
Delta{1} = zeros(size(theta{1}));
Delta{2} = zeros(size(theta{2}));
D = cell(2,1);
D{1} = zeros(size(theta{1}));
D{2} = zeros(size(theta{2}));
jVal = 0;
for in = 1:size(x,1)
    % Forward propagation
    a1 = [1;x(in,:)']; % added bias to a0
    z2 = theta{1}*a1;
    a2 = [1;L(z2)]; % added bias to a1
    z3 = theta{2}*a2;
    a3 = L(z3);
    % Back propagation
    d3 = a3 - y(in);
    d2 = theta{2}'*d3.*a2.*(1 - a2);
    Delta{2} = Delta{2} + d3*a2';
    Delta{1} = Delta{1} + d2(2:end)*a1';
    jVal = jVal + sum( y(in)*log(a3) + (1-y(in))*log(1-a3) );
end
D{1} = 1/m*Delta{1};
D{2} = 1/m*Delta{2};
jVal = -1/m*jVal;
gradVal = [D{1}(:);D{2}(:)];
gradApprox = CalcGradApprox(x(in,:),0.0001);
% Nested function to calculate gradApprox
function output = CalcGradApprox(x,epsilon)
output = zeros(size(thetaVec));
for n = 1:length(thetaVec)
    thetaVecMin = thetaVec;
    thetaVecMax = thetaVec;
    thetaVecMin(n) = thetaVec(n) - epsilon;
    thetaVecMax(n) = thetaVec(n) + epsilon;
    thetaMin = cell(2,1);
    thetaMax = cell(2,1);
    thetaMin{1} = reshape(thetaVecMin(1:6),[2 3]);
    thetaMin{2} = reshape(thetaVecMin(7:9),[1 3]);
    thetaMax{1} = reshape(thetaVecMax(1:6),[2 3]);
    thetaMax{2} = reshape(thetaVecMax(7:9),[1 3]);
    a3min = NN(x,thetaMin)';
    a3max = NN(x,thetaMax)';
    jValMin = 0;
    jValMax = 0;
    for inn = 1:size(x,1)
        jValMin = jValMin + sum( y(inn)*log(a3min) + (1-y(inn))*log(1-a3min) );
        jValMax = jValMax + sum( y(inn)*log(a3max) + (1-y(inn))*log(1-a3max) );
    end
    jValMin = 1/m*jValMin;
    jValMax = 1/m*jValMax;
    output(n) = (jValMax - jValMin)/2/epsilon;
end
end
end
I've only had a quick eyeball over your code. Here are some pointers.
Q1
should I sum each outputs given all training data (i = 1, ... N, where
N is number of inputs for training)
If you are talking in relation to the cost function, it is normal to sum and normalise by the number of training examples in order to allow comparison between datasets of different sizes.
I can't tell from the code whether you have a vectorised implementation, which will change the answer. Note that the sum function will only sum over a single dimension at a time - meaning that if you have an (M by N) array, sum will result in a 1 by N array.
The cost function should have a scalar output.
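For example:
A = ones(3, 4);
sum(A)      % 1-by-4 row vector: sums down each column
sum(A(:))   % scalar: sums all elements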
Q2
is the gradient calculated correctly
The gradient is not calculated correctly - specifically, the deltas look wrong. Try following Andrew Ng's notes [PDF]; they are very good.
Q3
is the numerical gradient (gradApprox) calculated correctly?
This line looks a bit suspect. Does this make more sense?
output(n) = (jValMax - jValMin)/(2*epsilon);
EDIT: I actually can't make heads or tails of your gradient approximation. You should only use forward propagation and small tweaks in the parameters to compute the gradient. Good luck!
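For what it's worth, here is a minimal central-difference checker; J is assumed to be a handle that maps a full parameter vector to the scalar cost (the names J and numerical_gradient are illustrative, not from your code):
function g = numerical_gradient(J, theta, epsilon)
% Central-difference approximation of the gradient of J at theta
g = zeros(size(theta));
for n = 1:numel(theta)
    e = zeros(size(theta));
    e(n) = epsilon;
    g(n) = (J(theta + e) - J(theta - e)) / (2*epsilon);
end
end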