Vectorizing 4 nested for loops - MATLAB

I'm trying to vectorize the two inner nested for loops, but I can't come up with a way to do this. The FS_1 and FS_2 functions have been written to accept arguments for N_theta and N_e, which is what the loops iterate over.
%% generate regions
for raw_r = 1:visual_field_width
    for raw_c = 1:visual_field_width
        r = raw_r - center_r;
        c = raw_c - center_c;
        % convert (r,c) to polar: (eccentricity, angle)
        e = sqrt(r^2 + c^2)*deg_per_pixel;
        a = mod(atan2(r,c), 2*pi);
        for nt = 1:N_theta
            for ne = 1:N_e
                regions(raw_r, raw_c, nt, ne) = ...
                    FS_1(nt-1, a, N_theta) * ...
                    FS_2(ne-1, e, N_e, e0_in_deg, e_max);
            end
        end
    end
end
Ideally, I could replace the two inner nested for loops by:
regions(raw_r,raw_c,:,:) = FS_1(:,a,N_theta) * FS_2(:,e,N_e,e0_in_deg,e_max);
But this isn't possible. Maybe I'm missing an easy fix or vectorization technique? e0_in_deg and e_max are parameters.
The FS_1 function is
function h = FS_1(n, theta, N, t)
if nargin == 2
    N = 9;
    t = 1/2;
elseif nargin == 3
    t = 1/2;
end
w = (2*pi)/N;
theta = theta + w/4;
if n == 0 && theta > (3/2)*pi
    theta = theta - 2*pi;
end
h = FS_f((theta - (w*n + 0.5*w*(1-t)))/w);
The FS_2 function is
function g = FS_gne(n, e, N, e0, e_max)
if nargin == 2
    N = 10;
    e0 = .5;
elseif nargin == 3
    e0 = .5;
end
w = (log(e_max) - log(e0))/N;
g = FS_f((log(e) - log(e0) - w*(n+1))/w);
And the FS_f function is
function f = FS_f(x, t)
if nargin < 2
    t = 0.5;
end
f = zeros(size(x));
% case 1
idx = x > -(1+t)/2 & x <= (t-1)/2;
f(idx) = (cos(0.5*pi*((x(idx) - (t-1)/2)/t))).^2;
% case 2
idx = x > (t-1)/2 & x <= (1-t)/2;
f(idx) = 1;
% case 3
idx = x > (1-t)/2 & x <= (1+t)/2;
f(idx) = -(cos(0.5*pi*((x(idx) - (1+t)/2)/t))).^2 + 1;

I had to assume values for the constants, and then used ndgrid to enumerate all index combinations and sub2ind to get the linear indices. Doing this removed all of the loops. Let me know whether this produces the correct values.
function RunningFunction
%% generate regions
visual_field_width = 10;
center_r = 2;
center_c = 3;
deg_per_pixel = 17;
N_theta = 2;
N_e = 5;
e0_in_deg = 35;
e_max = 17;
[raw_r, raw_c, nt, ne] = ndgrid(1:visual_field_width, 1:visual_field_width, 1:N_theta, 1:N_e);
ind = sub2ind(size(raw_r), raw_r, raw_c, nt, ne);
r = raw_r - center_r;
c = raw_c - center_c;
% convert (r,c) to polar: (eccentricity, angle)
e = sqrt(r.^2 + c.^2)*deg_per_pixel;
a = mod(atan2(r,c), 2*pi);
regions(ind) = ...
    FS_1(nt-1, a, N_theta) .* ...
    FS_2(ne-1, e, N_e, e0_in_deg, e_max);
regions = reshape(regions, size(raw_r));
end

function h = FS_1(n, theta, N, t)
if nargin == 2
    N = 9;
    t = 1/2;
elseif nargin == 3
    t = 1/2;
end
w = (2*pi)./N;
theta = theta + w/4;
theta(n==0 & theta>(3/2)*pi) = theta(n==0 & theta>(3/2)*pi) - 2*pi;
h = FS_f((theta - (w*n + 0.5*w*(1-t)))/w);
end

function g = FS_2(n, e, N, e0, e_max)
if nargin == 2
    N = 10;
    e0 = .5;
elseif nargin == 3
    e0 = .5;
end
w = (log(e_max) - log(e0))/N;
g = FS_f((log(e) - log(e0) - w*(n+1))/w);
end

function f = FS_f(x, t)
if nargin < 2
    t = 0.5;
end
f = zeros(size(x));
% case 1
idx = x > -(1+t)/2 & x <= (t-1)/2;
f(idx) = (cos(0.5*pi*((x(idx) - (t-1)/2)/t))).^2;
% case 2
idx = x > (t-1)/2 & x <= (1-t)/2;
f(idx) = 1;
% case 3
idx = x > (1-t)/2 & x <= (1+t)/2;
f(idx) = -(cos(0.5*pi*((x(idx) - (1+t)/2)/t))).^2 + 1;
end
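A quick way to sanity-check the result (just a sketch one could drop at the end of RunningFunction, reusing the constants defined there; it is not part of the answer itself) is to rebuild regions with the original quadruple loop and compare:
% sanity check: rebuild with the original loops and compare against the ndgrid version
regions_loop = zeros(visual_field_width, visual_field_width, N_theta, N_e);
for rr = 1:visual_field_width
    for cc = 1:visual_field_width
        ee = sqrt((rr-center_r)^2 + (cc-center_c)^2)*deg_per_pixel;
        aa = mod(atan2(rr-center_r, cc-center_c), 2*pi);
        for tt = 1:N_theta
            for kk = 1:N_e
                regions_loop(rr, cc, tt, kk) = ...
                    FS_1(tt-1, aa, N_theta) * FS_2(kk-1, ee, N_e, e0_in_deg, e_max);
            end
        end
    end
end
disp(max(abs(regions(:) - regions_loop(:))))  % should be (near) zero if the two agree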


Steepest Descent using Armijo rule

I want to determine the steepest descent of the Rosenbrock function using the Armijo step length, starting from x = [-1.2, 1]' (the initial column vector).
The problem is that the code runs for a very long time; I think an infinite loop is being created, but I cannot work out where the problem is.
Could anyone help me?
n = input('enter the number of variables n ');
% Armijo stepsize rule parameters
x = [-1.2 1]';
s = 10;
m = 0;
sigma = .1;
beta = .5;
obj = func(x);
g = grad(x);
k_max = 10^5;
k = 0;  % k = # iterations
nf = 1; % nf = # function eval.
x_new = zeros([],1); % empty vector which can be filled if length is not known
[X,Y] = meshgrid(-2:0.5:2);
fx = 100*(X.^2 - Y).^2 + (X-1).^2;
contour(X, Y, fx, 20)
while (norm(g) > 10^(-3)) && (k < k_max)
    d = -g./abs(g); % steepest descent direction
    s = 1;
    newobj = func(x + beta.^m*s*d);
    m = m+1;
    if obj > newobj - (sigma*beta.^m*s*g'*d)
        t = beta^m*s;
        x = x + t*d;
        m_new = m;
        newobj = func(x + t*d);
        nf = nf+1;
    else
        m = m+1;
    end
    obj = newobj;
    g = grad(x);
    k = k + 1;
    x_new = [x_new, x];
end
% Output x and k
x_new, k, nf
fprintf('Optimal Solution x = [%f, %f]\n', x(1), x(2))
plot(x_new)
function y = func(x)
    y = 100*(x(1)^2 - x(2))^2 + (x(1)-1)^2;
end
function y = grad(x)
    y(1) = 100*(2*(x(1)^2-x(2))*2*x(1)) + 2*(x(1)-1);
end
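For reference, the standard Armijo backtracking rule keeps shrinking the trial step t = beta^m*s (m = 0, 1, 2, ...) until f(x + t*d) <= f(x) + sigma*t*g'*d holds, and then takes that step. A minimal self-contained sketch of that rule (my illustration, not the code above; it assumes the full two-component Rosenbrock gradient, which the grad function above only partially defines) would be:
% Minimal Armijo backtracking steepest descent on the Rosenbrock function (illustration only)
rosen = @(x) 100*(x(1)^2 - x(2))^2 + (x(1) - 1)^2;
rgrad = @(x) [400*x(1)*(x(1)^2 - x(2)) + 2*(x(1) - 1); -200*(x(1)^2 - x(2))];
x = [-1.2; 1];
sigma = 0.1; beta = 0.5; s = 1;
for k = 1:10^5
    g = rgrad(x);
    if norm(g) <= 1e-3, break; end
    d = -g;                           % steepest-descent direction
    t = s;
    while rosen(x + t*d) > rosen(x) + sigma*t*(g'*d)
        t = beta*t;                   % backtrack until the Armijo condition holds
    end
    x = x + t*d;
end
x, k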

How to display a function with double values instead of symbolic?

I want the function P to look like this:
-1 + 0.6366*(x+pi/2) + (-0.000)*(x + pi/2)*(x)
and right now it looks like this
(5734161139222659*x)/9007199254740992 + (5734161139222659*pi)/18014398509481984 - (8131029572207409*x*(x + pi/2))/324518553658426726783156020576256 - 1.
How can I convert the S array so that the values are not symbolic?
syms P x
f = sin(x);
f = matlabFunction(f);
X = [-pi/2, 0, pi/2];
Y = f(sym(X));
P = MetN(X,Y,x)
P = matlabFunction(P);

function [P] = MetN(X,Y,x)
n = length(X);
for i = 1:n
    A(i,1) = 1;
end
for i = 2:n
    for j = 2:n
        if i >= j
            produs = 1;
            for k = 1:j-1
                produs = produs * (X(i) - X(k));
            end
            A(i,j) = produs;
        end
    end
end
S = SubsAsc(A, Y);
S = double(S);
disp(S);
sym produs
P = double(sym(S(1)));
for i = 2:n
    produs = 1;
    for j = 1:i-1
        produs = produs * (x - sym(X(j)));
    end
    disp(produs);
    P = P + double(S(i))*produs;
end
end

function [x] = SubsAsc(A,b)
n = length(b);
x(1) = (1/A(1,1))*b(1);
for k = 2:n
    s = 0;
    for j = 1:k-1
        s = s + A(k,j)*x(j);
    end
    x(k) = (1/A(k,k))*(b(k)-s);
end
end
The output you currently have appears because symbolic math uses exact arithmetic, so the coefficients are printed as rational numbers (hence the ugly fractions).
To have P displayed using decimals, use vpa(). For instance, to print P with 5 significant digits:
>> vpa(P, 5)
ans =
0.63662*x - 2.5056e-17*x*(x + 1.5708)
This will, however, also round pi, so you can't really have the best of both worlds here.
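If the end goal is a numeric function handle rather than a symbolic display, one possibility (just a sketch, assuming P is the symbolic polynomial returned by MetN above) is to apply vpa before matlabFunction:
P5 = vpa(P, 5);            % same polynomial, coefficients rounded to 5 significant digits
Pfun = matlabFunction(P5); % numeric function handle built from the rounded form
Pfun(0.3)                  % evaluate it like an ordinary function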

Finding the distribution of '1' in rows/columns from a parity-check matrix

My question this time concerns obtaining the degree distribution of an LDPC matrix through linear programming, under the following statement:
My code is the following:
function [v] = LP_Irr_LDPC(k,Ebn0)
options = optimoptions('fmincon','Display','iter','Algorithm','interior-point','MaxIter',4000,'MaxFunEvals',70000);
fun = @(v) -sum(v(1:k)./(1:k));
A = [];
b = [];
Aeq = [0, ones(1,k-1)];
beq = 1;
lb = zeros(1,k);
ub = [0, ones(1,k-1)];
nonlcon = @(v) DensEv_SP(v,Ebn0);
l0 = [0 rand(1,k-1)];
l0 = l0./sum(l0);
v = fmincon(fun,l0,A,b,Aeq,beq,lb,ub,nonlcon,options)
end
Definition of nonlinear constraints:
function [c, ceq] = DensEv_SP(v,Ebn0)
% It is also needed to modify this function, as you cannot pass parameters from others to it.
h = [0 rand(1,19)];
h = h./sum(h); % This is where h comes from
syms x;
X = x.^(0:(length(h)-1));
R = h*transpose(X);
ebn0 = 10^(Ebn0/10);
Rm = 1;
LLR = (-50:50);
p03 = 0.3;
LLR03 = log((1-p03)/p03);
r03 = 1 - p03;
noise03 = (2*r03*Rm*ebn0)^-1;
pf03 = normpdf(LLR, LLR03, noise03);
sumpf03 = sum(pf03(1:length(pf03)/2));
divisions = 100;
Aj = zeros(1, divisions);
rho = zeros(1, divisions);
xj = zeros(1, divisions);
k = 10; % Length(v) -> Same value as in 'Complete.m'
for j = 1:1:divisions
    xj(j) = sumpf03*j/divisions;
    rho(j) = subs(R,x,1-xj(j));
    Aj(j) = 1 - rho(j);
end
c = zeros(1, length(xj));
lambda = zeros(1, length(Aj));
for j = 1:1:length(xj)
    lambda(j) = sum(v(2:k).*(Aj(j).^(1:(k-1))));
    c(j) = sumpf03*lambda(j) - xj(j);
end
save Almacen
ceq = [];
%ceq = sum(v)-1;
end
This question is linked to the one posted here. My problem is that I need each element of the vectors v and h resulting from this optimization problem to be a fraction of the form x/N and x/(N(1-r)) respectively.
How could I ensure that condition without losing convergence capability?
I came up with one possible solution, at least for vector h, within the function DensEv_SP:
function [c, ceq] = DensEv_SP(v,Ebn0)
% It is also needed to modify this function, as you cannot pass parameters from others to it.
k = 10; % Same as in Complete.m, desired sum of h
M = 19; % Number of integers
% random composition of k into M non-negative integers, so h holds multiples of 1/k after normalization
h = [0 diff([0,sort(randperm(k+M-1,M-1)),k+M])-ones(1,M)];
h = h./sum(h);
syms x;
X = x.^(0:(length(h)-1));
R = h*transpose(X);
ebn0 = 10^(Ebn0/10);
Rm = 1;
LLR = (-50:50);
p03 = 0.3;
LLR03 = log((1-p03)/p03);
r03 = 1 - p03;
noise03 = (2*r03*Rm*ebn0)^-1;
pf03 = normpdf(LLR, LLR03, noise03);
sumpf03 = sum(pf03(1:length(pf03)/2));
divisions = 100;
Aj = zeros(1, divisions);
rho = zeros(1, divisions);
xj = zeros(1, divisions);
N = 20; % Length(v) -> Same value as in 'Complete.m'
for j = 1:1:divisions
    xj(j) = sumpf03*j/divisions;
    rho(j) = subs(R,x,1-xj(j));
    Aj(j) = 1 - rho(j);
end
c = zeros(1, length(xj));
lambda = zeros(1, length(Aj));
for j = 1:1:length(xj)
    lambda(j) = sum(v(2:k).*(Aj(j).^(1:(k-1))));
    c(j) = sumpf03*lambda(j) - xj(j);
end
save Almacen
ceq = (N*v)-floor(N*v);
%ceq = sum(v)-1;
end
As stated above, there is no longer any problem with vector h; nevertheless, the way I defined the ceq value seems to be insufficient to make the optimization work (the problems with v have not diminished at all). Does anybody know how to find the solution?
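For what it is worth, one unverified idea (shown only as an illustration, not a tested fix) would be to drop the integrality-style ceq, let fmincon converge freely, and only afterwards project the result onto the 1/N grid:
% Hypothetical post-processing sketch (assumes v returned by fmincon and N as above):
v_grid = round(N*v)/N;        % snap each element to the nearest multiple of 1/N
v_grid(1) = 0;                % keep the structural zero in the first entry
v_grid = v_grid/sum(v_grid);  % re-impose sum = 1 (this may move slightly off the grid)
Whether the projected vector still satisfies the density-evolution inequalities would have to be re-checked with DensEv_SP.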

How to improve the speed in MATLAB

This is my MATLAB code. It runs too slowly and I have no clue how to improve it.
Could you help me improve the speed?
What I would like to do is to create some random points and then remove random points so that the remaining set becomes similar to my target points.
syms Dx Dy p q;
a = 0;
num = 10;
x = rand(1,num);
y = rand(1,num);
figure(1)
scatter(x,y,'.','g')
%num_x = xlsread('F:\bin\test_2'); % num 1024
%figure(2)
%scatter(num_x(:,1),num_x(:,2),'.','r');
q = 0;
num_q = 10;
x_q = randn(1,num_q);
y_q = randn(1,num_q);
%figure(2)
hold on;
scatter(x_q,y_q,'.','r')
for i = 1:num_q
    for j = 1:num_q
        qx(i,j) = x_q(i) - x_q(j);
        qy(i,j) = y_q(i) - y_q(j);
        %qx(i,j) = num_x(i,1) - num_x(j,1);
        %qy(i,j) = num_x(i,2) - num_x(j,2);
        %d~(s(i),s(j))
        if ((qx(i,j))^2+(qy(i,j)^2)) > 0.01 % find neighbours
            qx(i,j) = 0;
            qy(i,j) = 0;
        end
    end
end
for i = 1:num_q
    for j = 1:num_q
        if qx(i,j)>0 && qy(i,j)>0
            q = q + exp(-(((Dx - qx(i,j))^2)+((Dy - qy(i,j))^2))/4);
        end
    end
end
%I = ones(num,num); % I(s) should come from a grayscale image
%r = 1./sqrt(I);
for s = 1:100
    for i = 1:num
        for j = 1:num
            dx(i,j) = x(i) - x(j);
            dy(i,j) = y(i) - y(j);
            %d~(s(i),s(j))
            if ((dx(i,j))^2+(dy(i,j)^2)) > 0.05 % delta p, find neighbours
                dx(i,j) = 0;
                dy(i,j) = 0;
            end
        end
    end
    p = 0;
    for i = 1:num
        for j = 1:num
            if dx(i,j)>0 && dy(i,j)>0
                p = p + exp(-(((Dx - dx(i,j))^2)+((Dy - dy(i,j))^2))/4);
            end
        end
    end
    p = p - q;
    sum = 0;
    for i = 1:num
        for j = 1:num
            if dx(i,j)>0 && dy(i,j)>0
                kx(i,j) = (1/2)*(Dx-dx(i,j))*exp((-(Dx-dx(i,j))^2+(Dy-dy(i,j))^2)/4);
                ky(i,j) = (1/2)*(Dy-dy(i,j))*exp((-(Dx-dx(i,j))^2+(Dy-dy(i,j))^2)/4);
            end
        end
    end
    sum_x = ones(1,num); % 1-row by num-column zero matrix
    sum_y = ones(1,num);
    %fx = zeros(1,num);
    for i = 1:num
        for j = 1:num
            if dx(i,j)>0 && dy(i,j)>0
                fx(i) = p*kx(i,j); % j is neighbour to i
                fy(i) = p*ky(i,j);
                %fx(i) = matlabFunction(fx(i));
                %fy(i) = matlabFunction(fy(i));
                %P = quad2d(@(Dx,Dy) fx,0,0.01,0,0.01);
                %fx = quad(@(Dx) fx,0,0.01);
                %fx(i) = quad(@(Dy) fx(i),0,0.01);
                %Q = quad2d(@(Dx,Dy) fy,0,0.01,0,0.01);
                fx(i) = double(int(int(fx(i),Dx,0,0.01),Dy,0,0.01));
                fy(i) = double(int(int(fy(i),Dx,0,0.01),Dy,0,0.01));
                %fx(i) = vpa(p*kx(i,j));
                %fy(i) = vpa(p*ky(i,j));
                %fx(i) = dblquad(@(Dx,Dy)fx(i),0,0.01,0,0.01);
                %fy(i) = dblquad(@(Dx,Dy)fy(i),0,0.01,0,0.01);
                sum_x(i) = sum_x(i) + fx(i);
                sum_y(i) = sum_y(i) + fy(i);
            end
        end
    end
    for i = 1:num
        sum_x = 4.*sum_x./num;
        sum_y = 4.*sum_y./num;
        x(i) = x(i) - 0.05*sum_x(i);
        y(i) = y(i) - 0.05*sum_y(i);
    end
    a = a+1
end
hold on;
scatter(x,y,'.','b')
The fast version of your loop should be something like:
qx = bsxfun(@minus, x_q.', x_q);
qy = bsxfun(@minus, y_q.', y_q);
il = (qx.^2 + qy.^2 >= 0.01);
qx(il) = 0;
qy(il) = 0;
il = qx>0 & qy>0;
q = sum(exp(-((Dx-qx(il)).^2 + (Dy-qy(il)).^2)/4));
%// etc. for vectorization of the inner loops
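Following the same pattern, the "%// etc." part for the inner p loops (inside the s loop) could be sketched as follows (a sketch only; x, y, q and the symbolic Dx, Dy are as in the question):
dx = bsxfun(@minus, x.', x);
dy = bsxfun(@minus, y.', y);
il = (dx.^2 + dy.^2 >= 0.05);  % drop non-neighbours
dx(il) = 0;
dy(il) = 0;
il = dx>0 & dy>0;
p = sum(exp(-((Dx-dx(il)).^2 + (Dy-dy(il)).^2)/4)) - q;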

Error in evaluating a function

EDIT: The code that I have pasted is too long. Basically, I don't know how to work with the second piece of code; if I knew how to calculate alpha from it, I think my problem would be solved. I have tried a lot of input arguments for the second code, but it does not work!
I have written the following code to solve a convex optimization problem using the gradient descent method:
function [optimumX,optimumF,counter,gNorm,dx] = grad_descent()
x0 = [3 3]'; %'//
terminationThreshold = 1e-6;
maxIterations = 100;
dxMin = 1e-6;
gNorm = inf; x = x0; counter = 0; dx = inf;
% ************************************
f = @(x1,x2) 4.*x1.^2 + 2.*x1.*x2 + 8.*x2.^2 + 10.*x1 + x2;
%alpha = 0.01;
% ************************************
figure(1); clf; ezcontour(f,[-5 5 -5 5]); axis equal; hold on
f2 = @(x) f(x(1),x(2));
% gradient descent algorithm:
while and(gNorm >= terminationThreshold, and(counter <= maxIterations, dx >= dxMin))
    g = grad(x);
    gNorm = norm(g);
    alpha = linesearch_strongwolfe(f, -g, x0, 1);
    xNew = x - alpha * g;
    % check step
    if ~isfinite(xNew)
        display(['Number of iterations: ' num2str(counter)])
        error('x is inf or NaN')
    end
    % **************************************
    plot([x(1) xNew(1)],[x(2) xNew(2)],'ko-')
    refresh
    % **************************************
    counter = counter + 1;
    dx = norm(xNew-x);
    x = xNew;
end
optimumX = x;
optimumF = f2(optimumX);
counter = counter - 1;
% define the gradient of the objective
function g = grad(x)
    g = [(8*x(1) + 2*x(2) + 10)
         (2*x(1) + 16*x(2) + 1)];
end
end
As you can see, I have commented out the alpha = 0.01; part. I want to calculate alpha via another piece of code. Here is that code (this code is not mine):
function alphas = linesearch_strongwolfe(f,d,x0,alpham)
alpha0 = 0;
alphap = alpha0;
c1 = 1e-4;
c2 = 0.5;
alphax = alpham*rand(1);
[fx0,gx0] = feval(f,x0,d);
fxp = fx0;
gxp = gx0;
i = 1;
while (1 ~= 2)
    xx = x0 + alphax*d;
    [fxx,gxx] = feval(f,xx,d);
    if (fxx > fx0 + c1*alphax*gx0) | ((i > 1) & (fxx >= fxp)),
        alphas = zoom(f,x0,d,alphap,alphax);
        return;
    end
    if abs(gxx) <= -c2*gx0,
        alphas = alphax;
        return;
    end
    if gxx >= 0,
        alphas = zoom(f,x0,d,alphax,alphap);
        return;
    end
    alphap = alphax;
    fxp = fxx;
    gxp = gxx;
    alphax = alphax + (alpham-alphax)*rand(1);
    i = i+1;
end

function alphas = zoom(f,x0,d,alphal,alphah)
c1 = 1e-4;
c2 = 0.5;
[fx0,gx0] = feval(f,x0,d);
while (1~=2),
    alphax = 1/2*(alphal+alphah);
    xx = x0 + alphax*d;
    [fxx,gxx] = feval(f,xx,d);
    xl = x0 + alphal*d;
    fxl = feval(f,xl,d);
    if ((fxx > fx0 + c1*alphax*gx0) | (fxx >= fxl)),
        alphah = alphax;
    else
        if abs(gxx) <= -c2*gx0,
            alphas = alphax;
            return;
        end
        if gxx*(alphah-alphal) >= 0,
            alphah = alphal;
        end
        alphal = alphax;
    end
end
But I get this error:
Error in linesearch_strongwolfe (line 11) [fx0,gx0] = feval(f,x0,d);
As you can see, I have written the f function and its gradient manually.
linesearch_strongwolfe(f,d,x0,alpham) takes a function f, the gradient of f, a vector x0, and a constant alpham. Is there anything wrong with my declaration of f? This code works just fine if I put back alpha = 0.01;
As I see it:
x0 = [3; 3]; %2-element column vector
g = grad(x0); %2-element column vector
f = @(x1,x2) 4.*x1.^2 + 2.*x1.*x2 +8.*x2.^2 + 10.*x1 + x2;
linesearch_strongwolfe(f,-g, x0, 1); %passing variables
inside the function:
[fx0,gx0] = feval(f,x0,-g); %variable names substituted with input vars
This will in effect call
[fx0,gx0] = f(x0,-g);
but f(x0,-g) is a single 2-element column vector with these inputs. Assigning the output to two variables will not work.
You either have to define f as a proper named function (just like grad) to output 2 variables (one for each component), or edit the code of linesearch_strongwolfe to return a single variable, then slice that into 2 separate variables yourself afterwards.
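For the first option, a named function could look something like this (a sketch only; fpair is a made-up name, and it follows the same componentwise reading of f as the anonymous version below):
function [f1, f2] = fpair(x1, x2)
% evaluate the objective componentwise, one output per component
f1 = 4*x1(1)^2 + 2*x1(1)*x2(1) + 8*x2(1)^2 + 10*x1(1) + x2(1);
f2 = 4*x1(2)^2 + 2*x1(2)*x2(2) + 8*x2(2)^2 + 10*x1(2) + x2(2);
end
grad_descent would then pass @fpair to linesearch_strongwolfe instead of the anonymous f.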
If you experience a very rare kind of laziness and don't want to define a named function, you can still use an anonymous function at the cost of duplicating code for the two components (at least I couldn't come up with a cleaner solution):
f = @(x1,x2) deal(4.*x1(1)^2 + 2.*x1(1)*x2(1) +8.*x2(1)^2 + 10.*x1(1) + x2(1),...
    4.*x1(2)^2 + 2.*x1(2)*x2(2) +8.*x2(2)^2 + 10.*x1(2) + x2(2));
[fx0,gx0] = f(x0,-g); %now works fine
as long as you always have 2 output variables. Note that this is more like a proof of concept, since this is ugly, inefficient, and very susceptible to typos.