Karhunen-Loève procedure - MATLAB

I'm trying to apply the Karhunen-Loève procedure to a translation-invariant data set. I understand the KL procedure and how to create a mask to smooth out missing data. However, I'm having a hard time creating a program to model my translation-invariant data set.
The data set that I need to plot in matlab is:
Translation-Invariant Data Set
And here's the matlab code that I tried to use to model it:
function [fmu] = kLProcedure(N, P, M)
for k = 1:N;
for m = 1:M;
for n = 1:P;
x(m) = ((m-1)*2.*(pi))/M;
t(n) = ((n - 1)*2.*(pi))/P;
k = 1:3;
fmu(x(m),t(n)) = (1/N).*symsum((1/k).*sin(k(x(m)-t(n))),k);
end
end
end
With N=3, P=64, M=64;
I'm trying to use nested for-loops to calculate each iteration of m, n, and t, and I keep getting the error:
Error using /
Matrix dimensions must agree.
Error in kLProcedure (line 28)
fmu(x(m),t(n)) = (1/N).*symsum((1/k).*sin(k(x(m)-t(n))),k);
Any advice would be greatly appreciated. Thank you.

Honestly, I don't have in-depth knowledge of the algorithms you are using, but looking at the formulation of the one you are trying to reproduce, I think the following code is what you are looking for:
fmu = kLProcedure(3,64,64);
plot(fmu);
function [fmu] = kLProcedure(N,M,P)
k = 1:N;
ki = 1 ./ k;
Ni = 1 / N;
m = 1:M;
x = ((m - 1) .* 2 .* pi()) ./ M;
n = 1:P;
t = ((n - 1) .* 2 .* pi()) ./ P;
fmu = zeros(M,P);
for i = m
for j = n
fmu(i,j) = Ni .* sum(ki .* sin(k .* (x(i) - t(j))));
end
end
end
Output:

Translation-Invariant Data
function [fmu] = kLProcedure(N,M,P)
m = 1:M;
x = ((m - 1) .* 2 .* pi()) ./ M;
n = 1:P;
t = ((n - 1) .* 2 .* pi()) ./ P;
fmu = zeros(M,P);
for i = m
for j = n
fmu(i,j) = 0;
for k = 1:N
fmu(i,j) = fmu(i,j) + (1/N).*(1/k).*sin(k .* (x(i) - t(j)));
end
end
end

MATLAB find the average time using tic toc

Construct an experiment to study the performance of Cramer's rule (with two determinant implementations) in relation to Gaussian elimination.
In each iteration, 10 random matrices A (N×N) and vectors b (N×1) will be created.
The 10 linear systems will be solved using Cramer's rule ("cramer.m"), once with rec_det(A) and once with det(A), and using the Gaussian algorithm ("GaussianElimination.m"); the time for each technique will be the average of the 10 values.
Repeat the above for N = 2 to 10 and plot the average time against the dimension N.
This is my task. I don't know whether the way I calculate the average time is correct, and the graph is not displayed.
T1=0;
T2=0;
T3=0;
for N=2:10
for i=1:10
A=rand(N,N);
b=rand(N,1);
t1=[1,i];
t2=[1,i];
t3=[1,i];
tic;
crammer(A,b);
t1(i)=toc;
tic
crammer_rec(A,b);
t2(i)=toc;
tic
gaussianElimination(A,b);
t3(i)=toc;
T1=T1+t1(i);
T2=T2+t2(i);
T3=T3+t3(i);
end
avT1=T1/10;
avT2=T2/10;
avT3=T3/10;
end
plot(2:10 , avT1 , 2:10 , avT2 , 2:10 , avT3);
function x = cramer(A, b)
n = length(b);
d = det(A);
% d = rec_det(A);
x = zeros(n, 1);
for j = 1:n
x(j) = det([A(:,1:j-1) b A(:,j+1:end)]) / d;
% x(j) = rec_det([A(:,1:j-1) b A(:,j+1:end)]) / d;
end
end
function x = cramer(A, b)
n = length(b);
d = rec_det(A);
x = zeros(n, 1);
for j = 1:n
x(j) = rec_det([A(:,1:j-1) b A(:,j+1:end)]) / d;
end
end
function deta = rec_det(R)
if size(R,1)~=size(R,2)
error('Error.Matrix must be square.')
else
n = size(R,1);
if ( n == 2 )
deta=(R(1,1)*R(2,2))-(R(1,2)*R(2,1));
else
for i=1:n
deta_temp=R;
deta_temp(1,:)=[ ];
deta_temp(:,i)=[ ];
if i==1
deta=(R(1,i)*((-1)^(i+1))*rec_det(deta_temp));
else
deta=deta+(R(1,i)*((-1)^(i+1))*rec_det(deta_temp));
end
end
end
end
end
function x = gaussianElimination(A, b)
[m, n] = size(A);
if m ~= n
error('Matrix A must be square!');
end
n1 = length(b);
if n1 ~= n
error('Vector b should be equal to the number of rows and columns of A!');
end
Aug = [A b]; % build the augmented matrix
C = zeros(1, n + 1);
% elimination phase
for k = 1:n - 1
% ensure that the pivoting point is the largest in its column
[pivot, j] = max(abs(Aug(k:n, k)));
C = Aug(k, :);
Aug(k, :) = Aug(j + k - 1, :);
Aug(j + k - 1, :) = C;
if Aug(k, k) == 0
error('Matrix A is singular');
end
for i = k + 1:n
r = Aug(i, k) / Aug(k, k);
Aug(i, k:n + 1) = Aug(i, k:n + 1) - r * Aug(k, k: n + 1);
end
end
% back substitution phase
x = zeros(n, 1);
x(n) = Aug(n, n + 1) / Aug(n, n);
for k = n - 1:-1:1
x(k) = (Aug(k, n + 1) - Aug(k, k + 1:n) * x(k + 1:n)) / Aug(k, k);
end
end
I think the easiest way to do this is to create a 9×3 matrix to contain all the total times, and then take the average at the end.
allTimes = zeros(9, 3);
for N=2:10
for ii=1:10
A=rand(N,N);
b=rand(N,1);
tic;
crammer(A,b);
temp = toc;
allTimes(N-1,1) = allTimes(N-1,1) + temp;
tic
crammer_rec(A,b);
temp = toc;
allTimes(N-1,2) = allTimes(N-1,2) + temp;
tic
gaussianElimination(A,b);
temp = toc;
allTimes(N-1,3) = allTimes(N-1,3) + temp;
end
end
allTimes = allTimes/10;
figure; plot(2:10, allTimes);
You can use this approach because the numbers are quite straightforward and simple. If you had a more complicated setup, the way to store the times/calculate the averages would have to be tweaked.
If you had more functions you could also use function handles and create a third inner loop, but this is a little more advanced.
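As a rough sketch of that idea (reusing the function names from the question; adjust them to whatever your solvers are actually called), the handles let the timing loop stay generic:
solvers = {@crammer, @crammer_rec, @gaussianElimination}; % handles to the three solvers
allTimes = zeros(9, numel(solvers));
for N = 2:10
    for ii = 1:10
        A = rand(N, N);
        b = rand(N, 1);
        for s = 1:numel(solvers)              % third inner loop over the solvers
            tic;
            solvers{s}(A, b);
            allTimes(N-1, s) = allTimes(N-1, s) + toc;
        end
    end
end
allTimes = allTimes / 10;                     % average over the 10 repetitions
figure; plot(2:10, allTimes);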

Simplifying function by removing a loop

What would be the best way to simplify a function by getting rid of a loop?
function Q = gs(f, a, b)
X(4) = sqrt((3+2*sqrt(6/5))/7);
X(3) = sqrt((3-2*sqrt(6/5))/7);
X(2) = -sqrt((3-2*sqrt(6/5))/7);
X(1) = -sqrt((3+2*sqrt(6/5))/7);
W(4) = (18-sqrt(30))/36;
W(3) = (18+sqrt(30))/36;
W(2) = (18+sqrt(30))/36;
W(1) = (18-sqrt(30))/36;
Q = 0;
for i = 1:4
W(i) = (W(i)*(b-a))/2;
X(i) = ((b-a)*X(i)+(b+a))/2;
Q = Q + W(i) * f(X(i));
end
end
Is there any way to use any vector-like solution instead of a for loop?
sum is your best friend here. Also, declaring some constants and creating vectors is useful:
function Q = gs(f, a, b)
c = sqrt((3+2*sqrt(6/5))/7);
d = sqrt((3-2*sqrt(6/5))/7);
e = (18-sqrt(30))/36;
g = (18+sqrt(30))/36;
X = [-c -d d c];
W = [e g g e];
W = ((b - a) / 2) * W;
X = ((b - a)*X + (b + a)) / 2;
Q = sum(W .* f(X));
end
Note that MATLAB loves to handle element-wise operations, so the key is to replace the for loop at the end with scaling all of the elements in W and X with those scaling factors seen in your loop. In addition, using the element-wise multiplication (.*) is key. This of course assumes that f can handle things in an element-wise fashion. If it doesn't, then there's no way to avoid the for loop.
I would highly recommend you consult the MATLAB tutorial on element-wise operations before you venture onwards on your MATLAB journey: https://www.mathworks.com/help/matlab/matlab_prog/array-vs-matrix-operations.html
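One small aside: if f only accepts scalar inputs, arrayfun can hide the explicit loop, although MATLAB still evaluates f one element at a time, so this is a readability trick rather than true vectorization. A minimal sketch:
% Apply the scalar-only handle f to every element of X, then weight and sum.
Q = sum(W .* arrayfun(f, X));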

MATLAB sparse matrices: Gauss Seidel and power method using a sparse matrix with CSR (Compressed Sparse Row)

This is my first time here, so I hope that someone can help me.
I'm trying to implement the Gauss-Seidel method and the power method using a matrix stored in CSR format (also called Morse storage). Unfortunately, I can't manage to do better than the following code:
GS-MORSE:
function [y] = gs_morse(aa, diag, col, row, nmax, tol)
[n, n] = size(A);
y = [1, 1, 1, 1];
m = 1;
while m < nmax,
for i = 1: n,
k1 = row(i);
k2 = row(i + 1) - 1;
for k = k1: k2,
y(i) = y(i) + aa(k) * x(col(k));
y(col(k)) = y(col(k)) + aa(k) * diag(i);
end
k2 = k2 + 1;
y(i) = y(i) + aa(k) * diag(i);
end
if (norm(y - x)) < tol
disp(y);
end
m = m + 1;
for i = 1: n,
x(i) = y(i);
end
end
POWER-MORSE:
I was only able to implement the power method, but I don't understand how to use the CSR-stored matrix with it... so my code for the power method is:
function [y, l] = potencia_iterada(A, v)
numiter=100;
eps=1e-10;
x = v(:);
y = x/norm(x);
l = 0;
for k = 1: numiter,
x = A * y;
y = x / norm(x);
l0 = x.' * y;
if abs(l0) < eps
return
end
l = l0;
end
Can anyone help me complete this code, or explain how I can do it? I really don't understand how. Thank you very much.
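A minimal sketch of the building block both methods need, assuming the usual CSR convention (aa holds the nonzero values row by row, col their column indices, and row(i):row(i+1)-1 marks the entries of row i):
function y = csr_matvec(aa, col, row, x)
% Sketch: y = A*x for a matrix A stored in CSR (Morse) format.
n = length(row) - 1;          % number of rows
y = zeros(n, 1);
for i = 1:n
    for k = row(i):row(i+1)-1
        y(i) = y(i) + aa(k) * x(col(k));
    end
end
end
With that in place, the power method above can reuse it by replacing x = A * y with x = csr_matvec(aa, col, row, y); a Gauss-Seidel sweep additionally needs the diagonal entries, which gs_morse already receives as diag.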

how to remove array index out of bound error in matlab

Here is my code, where I have one error regarding an array index out of bounds. Please help me rectify it.
I = imread('E:\degraded images\village.jpg');
imshow(I)
I = im2double(I);
I = log(1 + I);
M = 2*size(I,1) + 1;
N = 2*size(I,2) + 1;
sigma = 10;
[X, Y] = meshgrid(1:N,1:M);
centerX = ceil(N/2);
centerY = ceil(M/2);
gaussianNumerator = (X - centerX).^2 + (Y - centerY).^2;
H = exp(-gaussianNumerator./(2*sigma.^2));
H = 1 - H;
imshow(H,'InitialMagnification',25)
H = fftshift(H);
If = fft2(I, M, N);
Iout = real(ifft2(H.*If));
Here the code has an error:
??? Error using ==> times
Number of array dimensions must match for binary array op.
H is 2-D while If is 3-D. You can use repmat with H or subset If. I don't know which one is correct for your situation. For instance,
repmat(H, [1, 1, 3]) .* If;
or
H .* If(:,:,ind); % ind is the index of the 2-D array you want to subset
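Put in the context of the code above, a sketch of the repmat route (assuming the input image is RGB, so If is M-by-N-by-3) would be:
H3 = repmat(H, [1, 1, size(If, 3)]);   % replicate the 2-D filter across the colour channels
Iout = real(ifft2(H3 .* If));          % element-wise product is now well-defined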

Regularized logistic regression code in matlab

I'm trying my hand at regularized logistic regression, simply using these formulas, in MATLAB:
The cost function:
J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left[-y_i \log(h(x_i)) - (1 - y_i)\log(1 - h(x_i))\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2
The gradient:
\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\bigl(h(x_i) - y_i\bigr)x_{i,0} \quad (j = 0)
\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\bigl(h(x_i) - y_i\bigr)x_{i,j} + \frac{\lambda}{m}\theta_j \quad (j \ge 1)
This is not MATLAB code, just the formulas.
So far I've done this:
function [J, grad] = costFunctionReg(theta, X, y, lambda)
J = 0;
grad = zeros(size(theta));
temp_theta = [];
%cost function
%get the regularization term
for jj = 2:length(theta)
temp_theta(jj) = theta(jj)^2;
end
theta_reg = lambda/(2*m)*sum(temp_theta);
temp_sum =[];
%for the sum in the cost function
for ii =1:m
temp_sum(ii) = -y(ii)*log(sigmoid(theta'*X(ii,:)'))-(1-y(ii))*log(1-sigmoid(theta'*X(ii,:)'));
end
tempo = sum(temp_sum);
J = (1/m)*tempo+theta_reg;
%regulatization
%theta 0
reg_theta0 = 0;
for jj=1:m
reg_theta0(jj) = (sigmoid(theta'*X(m,:)') -y(jj))*X(jj,1)
end
reg_theta0 = (1/m)*sum(reg_theta0)
grad_temp(1) = reg_theta0
%for the rest of thetas
reg_theta = [];
thetas_sum = 0;
for ii=2:size(theta)
for kk =1:m
reg_theta(kk) = (sigmoid(theta'*X(m,:)') - y(kk))*X(kk,ii)
end
thetas_sum(ii) = (1/m)*sum(reg_theta)+(lambda/m)*theta(ii)
reg_theta = []
end
for i=1:size(theta)
if i == 1
grad(i) = grad_temp(i)
else
grad(i) = thetas_sum(i)
end
end
end
The cost function is giving correct results, but I have no idea why the gradient (one step) is not. The cost gives J = 0.6931, which is correct, but the gradient grad = [0.3603, -0.1476, 0.0320] is not. (The regularization sum starts from 2 because the parameter theta(1) does not have to be regularized.) Any help? I guess there is something wrong with the code, but after 4 days I can't see it. Thanks.
Vectorized:
function [J, grad] = costFunctionReg(theta, X, y, lambda)
hx = sigmoid(X * theta);
m = size(X, 1);
J = (sum(-y' * log(hx) - (1 - y')*log(1 - hx)) / m) + lambda * sum(theta(2:end).^2) / (2*m);
grad =((hx - y)' * X / m)' + lambda .* theta .* [0; ones(length(theta)-1, 1)] ./ m ;
end
I used more variables, so you can see clearly what comes from the regular formula and what comes from the added regularization cost. Additionally, it is good practice to use vectorization instead of loops in MATLAB/Octave. By doing this, you usually get a more optimized solution.
function [J, grad] = costFunctionReg(theta, X, y, lambda)
%Hypotheses
hx = sigmoid(X * theta);
%Number of training examples (needed for the 1/m factors below)
m = size(X, 1);
%%The cost without regularization
J_partial = (-y' * log(hx) - (1 - y)' * log(1 - hx)) ./ m;
%%Regularization Cost Added
J_regularization = (lambda/(2*m)) * sum(theta(2:end).^2);
%%Cost when we add regularization
J = J_partial + J_regularization;
%Grad without regularization
grad_partial = (1/m) * (X' * (hx -y));
%%Grad Cost Added
grad_regularization = (lambda/m) .* theta(2:end);
grad_regularization = [0; grad_regularization];
grad = grad_partial + grad_regularization;
end
Finally got it, after rewriting it again like for the 4th time, this is the correct code:
function [J, grad] = costFunctionReg(theta, X, y, lambda)
J = 0;
grad = zeros(size(theta));
temp_theta = [];
for jj = 2:length(theta)
temp_theta(jj) = theta(jj)^2;
end
theta_reg = lambda/(2*m)*sum(temp_theta);
temp_sum =[];
for ii =1:m
temp_sum(ii) = -y(ii)*log(sigmoid(theta'*X(ii,:)'))-(1-y(ii))*log(1-sigmoid(theta'*X(ii,:)'));
end
tempo = sum(temp_sum);
J = (1/m)*tempo+theta_reg;
%regulatization
%theta 0
reg_theta0 = 0;
for i=1:m
reg_theta0(i) = ((sigmoid(theta'*X(i,:)'))-y(i))*X(i,1)
end
theta_temp(1) = (1/m)*sum(reg_theta0)
grad(1) = theta_temp
sum_thetas = []
thetas_sum = []
for j = 2:size(theta)
for i = 1:m
sum_thetas(i) = ((sigmoid(theta'*X(i,:)'))-y(i))*X(i,j)
end
thetas_sum(j) = (1/m)*sum(sum_thetas)+((lambda/m)*theta(j))
sum_thetas = []
end
for z=2:size(theta)
grad(z) = thetas_sum(z)
end
% =============================================================
end
Hope it helps someone, and if anyone has any comments on how I can do it better, let me know. :)
Here is an answer that eliminates the loops:
m = length(y); % number of training examples
predictions = sigmoid(X*theta);
reg_term = (lambda/(2*m)) * sum(theta(2:end).^2);
calcErrors = -y.*log(predictions) - (1 -y).*log(1-predictions);
J = (1/m)*sum(calcErrors)+reg_term;
% prepend a 0 column to our reg_term matrix so we can use simple matrix addition
reg_term = [0 (lambda*theta(2:end)/m)'];
grad = sum(X.*(predictions - y)) / m + reg_term;