Tangent-tangent correlation calculation in matrix form in MATLAB

Consider the following calculation of the tangent-tangent correlation, which is performed in a for loop:
v1=rand(25,1);
v2=rand(25,1);
n=25;
nSteps=10;
mean_theta = zeros(nSteps,1);
for j=1:nSteps
theta=[];
for i=1:(n-j)
d = dot([v1(i) v2(i)],[v1(i+j) v2(i+j)]);
n1 = norm([v1(i) v2(i)]);
n2 = norm([v1(i+j) v2(i+j)]);
theta = [theta acosd(d/n1/n2)];
end
mean_theta(j)=mean(theta);
end
plot(mean_theta)
How can MATLAB matrix calculations be used to make the performance better?

There are several things you can do to speed up your code. First, always preallocate: growing an array inside a loop forces a reallocation and copy on every iteration. This converts:
theta = [];
for i = 1:(n-j)
%...
theta = [theta acosd(d/n1/n2)];
end
into:
theta = zeros(1,n-j);
for i = 1:(n-j)
%...
theta(i) = acosd(d/n1/n2);
end
Next, move the normalization out of the loops. There is no need to normalize over and over again; just normalize the input once:
v = [v1,v2];
v = v./sqrt(sum(v.^2,2)); % Can use VECNORM in newest MATLAB
%...
theta(i) = acosd(dot(v(i,:),v(i+j,:)));
This does change the output very slightly, within numerical precision, because the different order of operations leads to different floating-point rounding error.
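A minimal sketch of the effect, using two random vectors (the exact difference varies from run to run, and is often exactly zero):
u = rand(1,2); w = rand(1,2);
t1 = acosd(dot(u,w) / norm(u) / norm(w)); % normalize inside acosd
t2 = acosd(dot(u/norm(u), w/norm(w)));    % normalize the inputs first
abs(t1 - t2) % typically zero or on the order of 1e-13 degrees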
Finally, you can remove the inner loop by vectorizing that computation:
i = 1:(n-j);
theta = acosd(dot(v(i,:),v(i+j,:),2));
Timings (n=25):
Original: 0.0019 s
Preallocate: 0.0013 s
Normalize once: 0.0011 s
Vectorize: 1.4176e-04 s
Timings (n=250):
Original: 0.0185 s
Preallocate: 0.0146 s
Normalize once: 0.0118 s
Vectorize: 2.5694e-04 s
Note how the vectorized code is the only one whose timing doesn't grow linearly with n; at these sizes its cost is dominated by fixed call overhead rather than per-element work.
Timing code:
function so
n = 25;
v1 = rand(n,1);
v2 = rand(n,1);
nSteps = 10;
mean_theta1 = method1(v1,v2,nSteps);
mean_theta2 = method2(v1,v2,nSteps);
fprintf('diff method1 vs method2: %g\n',max(abs(mean_theta1(:)-mean_theta2(:))));
mean_theta3 = method3(v1,v2,nSteps);
fprintf('diff method1 vs method3: %g\n',max(abs(mean_theta1(:)-mean_theta3(:))));
mean_theta4 = method4(v1,v2,nSteps);
fprintf('diff method1 vs method4: %g\n',max(abs(mean_theta1(:)-mean_theta4(:))));
timeit(@()method1(v1,v2,nSteps))
timeit(@()method2(v1,v2,nSteps))
timeit(@()method3(v1,v2,nSteps))
timeit(@()method4(v1,v2,nSteps))
function mean_theta = method1(v1,v2,nSteps)
n = numel(v1);
mean_theta = zeros(nSteps,1);
for j = 1:nSteps
theta=[];
for i=1:(n-j)
d = dot([v1(i) v2(i)],[v1(i+j) v2(i+j)]);
n1 = norm([v1(i) v2(i)]);
n2 = norm([v1(i+j) v2(i+j)]);
theta = [theta acosd(d/n1/n2)];
end
mean_theta(j) = mean(theta);
end
function mean_theta = method2(v1,v2,nSteps)
n = numel(v1);
mean_theta = zeros(nSteps,1);
for j = 1:nSteps
theta = zeros(1,n-j);
for i = 1:(n-j)
d = dot([v1(i) v2(i)],[v1(i+j) v2(i+j)]);
n1 = norm([v1(i) v2(i)]);
n2 = norm([v1(i+j) v2(i+j)]);
theta(i) = acosd(d/n1/n2);
end
mean_theta(j) = mean(theta);
end
function mean_theta = method3(v1,v2,nSteps)
v = [v1,v2];
v = v./sqrt(sum(v.^2,2)); % Can use VECNORM in newest MATLAB
n = numel(v1);
mean_theta = zeros(nSteps,1);
for j = 1:nSteps
theta = zeros(1,n-j);
for i = 1:(n-j)
theta(i) = acosd(dot(v(i,:),v(i+j,:)));
end
mean_theta(j) = mean(theta);
end
function mean_theta = method4(v1,v2,nSteps)
v = [v1,v2];
v = v./sqrt(sum(v.^2,2)); % Can use VECNORM in newest MATLAB
n = numel(v1);
mean_theta = zeros(nSteps,1);
for j = 1:nSteps
i = 1:(n-j);
theta = acosd(dot(v(i,:),v(i+j,:),2));
mean_theta(j) = mean(theta);
end

Here is a full vectorized solution:
i = 1:n-1;                         % all start indices, as a row vector
j = (1:nSteps).';                  % all lags, as a column vector
ij = min(i+j,n);                   % clamp out-of-range indices; masked below
a = cat(3, v1(i).', v2(i).');      % 1 x (n-1) x 2
b = cat(3, v1(ij), v2(ij));        % nSteps x (n-1) x 2
d = sum(a .* b, 3);                % all pairwise dot products at once
n1 = sum(a .^ 2, 3);               % squared norms of the first vectors
n2 = sum(b .^ 2, 3);               % squared norms of the second vectors
theta = acosd(d./sqrt(n1.*n2));
idx = (1:nSteps).' <= (n-1:-1:1);  % valid (i,j) pairs, i.e. i+j <= n
mean_theta = sum(theta .* idx ,2) ./ sum(idx,2);
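Note that this relies on implicit expansion: for example, min(i+j,n) combines a row vector with a column vector, which requires MATLAB R2016b or newer (Octave has supported this broadcasting for a long time). The entries where i+j would exceed n are clamped by the min and then excluded by the idx mask.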
Result of Octave timings for my method, method4 from the answer provided by @CrisLuengo, and the original method (n=250):
Full vectorized : 0.000864983 seconds
Method4(Vectorize) : 0.002774 seconds
Original(loop) : 0.340693 seconds

Related

CUDA loop on matlab

I have been playing around with parallelization, using both ACC and OpenMP in Fortran. I am now trying to do the same in MATLAB. I find it very interesting that it seems to be very hard to parallelize a loop using GPUs in MATLAB. Apparently the only way to do it is by using the arrayfun function, but I might be wrong.
At a conceptual level, I am wondering why GPU usage in MATLAB is not more straightforward than in Fortran. At a more practical level, I am wondering how to use GPUs on the simple code below.
Below, I am sharing three codes and benchmarks:
Fortran OpenMP code
Fortran ACC code
Matlab parfor code
Matlab CUDA (?): this is the one I don't know how to do.
Fortran OpenMP:
program rbc
use omp_lib ! For timing
use tools
implicit none
real, parameter :: beta = 0.984, eta = 2, alpha = 0.35, delta = 0.01, &
rho = 0.95, sigma = 0.005, zmin=-0.0480384, zmax=0.0480384;
integer, parameter :: nz = 4, nk=4800;
real :: zgrid(nz), kgrid(nk), t_tran_z(nz,nz), tran_z(nz,nz);
real :: kmax, kmin, tol, dif, c(nk), r(nk), w(nk);
real, dimension(nk,nz) :: v=0., v0=0., ev=0., c0=0.;
integer :: i, iz, ik, cnt;
logical :: ind(nk);
real(kind=8) :: start, finish ! For timing
real :: tmpmax, c1
call omp_set_num_threads(12)
!Grid for productivity z
! [1 x 4] grid of values for z
call linspace(zmin,zmax,nz,zgrid)
zgrid = exp(zgrid)
! [4 x 4] Markov transition matrix of z
tran_z(1,1) = 0.996757
tran_z(1,2) = 0.00324265
tran_z(1,3) = 0
tran_z(1,4) = 0
tran_z(2,1) = 0.000385933
tran_z(2,2) = 0.998441
tran_z(2,3) = 0.00117336
tran_z(2,4) = 0
tran_z(3,1) = 0
tran_z(3,2) = 0.00117336
tran_z(3,3) = 0.998441
tran_z(3,4) = 0.000385933
tran_z(4,1) = 0
tran_z(4,2) = 0
tran_z(4,3) = 0.00324265
tran_z(4,4) = 0.996757
! Grid for capital k
kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)**(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)**(1/(alpha-1));
! [1 x 4800] grid of possible values of k
call linspace(kmin, kmax, nk, kgrid)
! Compute initial wealth c0(k,z)
do iz=1,nz
c0(:,iz) = zgrid(iz)*kgrid**alpha + (1-delta)*kgrid;
end do
dif = 10000
tol = 1e-8
cnt = 1
do while(dif>tol)
!$omp parallel do default(shared) private(ik,iz,i,tmpmax,c1)
do ik=1,nk;
do iz = 1,nz;
tmpmax = -huge(0.)
do i = 1,nk
c1 = c0(ik,iz) - kgrid(i)
if(c1<0) exit
c1 = c1**(1-eta)/(1-eta)+ev(i,iz)
if(tmpmax<c1) tmpmax = c1
end do
v(ik,iz) = tmpmax
end do
end do
!$omp end parallel do
ev = beta*matmul(v,tran_z)
dif = maxval(abs(v-v0))
v0 = v
if(mod(cnt,1)==0) write(*,*) cnt, ':', dif
cnt = cnt+1
end do
end program
Fortran ACC:
Just replace the main loop in the above code with:
do while(dif>tol)
!$acc kernels
!$acc loop gang
do ik=1,nk;
!$acc loop gang
do iz = 1,nz;
tmpmax = -huge(0.)
do i = 1,nk
c1 = c0(ik,iz) - kgrid(i)
if(c1<0) exit
c1 = c1**(1-eta)/(1-eta)+ev(i,iz)
if(tmpmax<c1) tmpmax = c1
end do
v(ik,iz) = tmpmax
end do
end do
!$acc end kernels
ev = beta*matmul(v,tran_z)
dif = maxval(abs(v-v0))
v0 = v
if(mod(cnt,1)==0) write(*,*) cnt, ':', dif
cnt = cnt+1
end do
Matlab parfor:
(I know the code below could be made faster by using vectorized syntax, but the whole point of the exercise is to compare loop speeds).
tic;
beta = 0.984;
eta = 2;
alpha = 0.35;
delta = 0.01;
rho = 0.95;
sigma = 0.005;
zmin=-0.0480384;
zmax=0.0480384;
nz = 4;
nk=4800;
v=zeros(nk,nz);
v0=zeros(nk,nz);
ev=zeros(nk,nz);
c0=zeros(nk,nz);
%Grid for productivity z
%[1 x 4] grid of values for z
zgrid = linspace(zmin,zmax,nz);
zgrid = exp(zgrid);
% [4 x 4] Markov transition matrix of z
tran_z(1,1) = 0.996757;
tran_z(1,2) = 0.00324265;
tran_z(1,3) = 0;
tran_z(1,4) = 0;
tran_z(2,1) = 0.000385933;
tran_z(2,2) = 0.998441;
tran_z(2,3) = 0.00117336;
tran_z(2,4) = 0;
tran_z(3,1) = 0;
tran_z(3,2) = 0.00117336;
tran_z(3,3) = 0.998441;
tran_z(3,4) = 0.000385933;
tran_z(4,1) = 0;
tran_z(4,2) = 0;
tran_z(4,3) = 0.00324265;
tran_z(4,4) = 0.996757;
% Grid for capital k
kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)^(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)^(1/(alpha-1));
% [1 x 4800] grid of possible values of k
kgrid = linspace(kmin, kmax, nk);
% Compute initial wealth c0(k,z)
for iz=1:nz
c0(:,iz) = zgrid(iz)*kgrid.^alpha + (1-delta)*kgrid;
end
dif = 10000;
tol = 1e-8;
cnt = 1;
while dif>tol
parfor ik=1:nk
for iz = 1:nz
tmpmax = -intmax;
for i = 1:nk
c1 = c0(ik,iz) - kgrid(i);
if (c1<0)
continue
end
c1 = c1^(1-eta)/(1-eta)+ev(i,iz);
if tmpmax<c1
tmpmax = c1;
end
end
v(ik,iz) = tmpmax;
end
end
ev = beta*v*tran_z;
dif = max(max(abs(v-v0)));
v0 = v;
if mod(cnt,1)==0
fprintf('%1.5f : %1.5f \n', [cnt dif])
end
cnt = cnt+1;
end
toc
Matlab CUDA:
This is what I have no clue how to code. Is using arrayfun the only way of doing this? In Fortran it is so simple to move from OpenMP to OpenACC. Isn't there an easy way in MATLAB to go from parfor to GPU loops?
The time comparison between codes:
Fortran OpenMP: 83.1 seconds
Fortran ACC: 2.4 seconds
Matlab parfor: 1182 seconds
Final remark: I should say the codes above solve a simple Real Business Cycle model and were written based on this.
Matlab Coder
First, as Dev-iL already mentioned, you can use GPU Coder.
It (I use R2019a) would only require minor changes in your code:
function cdapted()
beta = 0.984;
eta = 2;
alpha = 0.35;
delta = 0.01;
rho = 0.95;
sigma = 0.005;
zmin=-0.0480384;
zmax=0.0480384;
nz = 4;
nk=4800;
v=zeros(nk,nz);
v0=zeros(nk,nz);
ev=zeros(nk,nz);
c0=zeros(nk,nz);
%Grid for productivity z
%[1 x 4] grid of values for z
zgrid = linspace(zmin,zmax,nz);
zgrid = exp(zgrid);
% [4 x 4] Markov transition matrix of z
tran_z = zeros([4,4]);
tran_z(1,1) = 0.996757;
tran_z(1,2) = 0.00324265;
tran_z(1,3) = 0;
tran_z(1,4) = 0;
tran_z(2,1) = 0.000385933;
tran_z(2,2) = 0.998441;
tran_z(2,3) = 0.00117336;
tran_z(2,4) = 0;
tran_z(3,1) = 0;
tran_z(3,2) = 0.00117336;
tran_z(3,3) = 0.998441;
tran_z(3,4) = 0.000385933;
tran_z(4,1) = 0;
tran_z(4,2) = 0;
tran_z(4,3) = 0.00324265;
tran_z(4,4) = 0.996757;
% Grid for capital k
kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)^(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)^(1/(alpha-1));
% [1 x 4800] grid of possible values of k
kgrid = linspace(kmin, kmax, nk);
% Compute initial wealth c0(k,z)
for iz=1:nz
c0(:,iz) = zgrid(iz)*kgrid.^alpha + (1-delta)*kgrid;
end
dif = 10000;
tol = 1e-8;
cnt = 1;
while dif>tol
for ik=1:nk
for iz = 1:nz
tmpmax = double(intmin);
for i = 1:nk
c1 = c0(ik,iz) - kgrid(i);
if (c1<0)
continue
end
c1 = c1^(1-eta)/(1-eta)+ev(i,iz);
if tmpmax<c1
tmpmax = c1;
end
end
v(ik,iz) = tmpmax;
end
end
ev = beta*v*tran_z;
dif = max(max(abs(v-v0)));
v0 = v;
% I've commented out fprintf because double2single cannot handle it
% (could be manually uncommented in the converted version if needed)
% ------------
% if mod(cnt,1)==0
% fprintf('%1.5f : %1.5f \n', cnt, dif);
% end
cnt = cnt+1;
end
end
The script to build this is:
% unload mex files
clear mex
%% Build for gpu, float64
% Produces ".\codegen\mex\cdapted" folder and "cdapted_mex.mexw64"
cfg = coder.gpuConfig('mex');
codegen -config cfg cdapted
% benchmark it (~7.14s on my GTX1080Ti)
timeit(@() cdapted_mex,0)
%% Build for gpu, float32:
% Produces ".\codegen\cdapted\single" folder
scfg = coder.config('single');
codegen -double2single scfg cdapted
% Produces ".\codegen\mex\cdapted_single" folder and "cdapted_single_mex.mexw64"
cfg = coder.gpuConfig('mex');
codegen -config cfg .\codegen\cdapted\single\cdapted_single.m
% benchmark it (~2.09s on my GTX1080Ti)
timeit(@() cdapted_single_mex,0)
So, if your Fortran binary uses float32 precision (I suspect so), this Matlab Coder result is on par with it. That does not mean that either is highly efficient, though. The code generated by Matlab Coder is still far from efficient, and it does not fully utilize the GPU (even the TDP is only ~50%).
Vectorization and gpuArray
Next, I agree with user10597469 and Nicky Mattsson that your Matlab code does not look like normal "native" vectorized Matlab code.
There are many things to adjust (though arrayfun is hardly better than for). First, let's remove the for loops:
function vectorized1()
t_tot = tic();
beta = 0.984;
eta = 2;
alpha = 0.35;
delta = 0.01;
rho = 0.95;
sigma = 0.005;
zmin=-0.0480384;
zmax=0.0480384;
nz = 4;
nk=4800;
v=zeros(nk,nz);
v0=zeros(nk,nz);
ev=zeros(nk,nz);
c0=zeros(nk,nz);
%Grid for productivity z
%[1 x 4] grid of values for z
zgrid = linspace(zmin,zmax,nz);
zgrid = exp(zgrid);
% [4 x 4] Markov transition matrix of z
tran_z = zeros([4,4]);
tran_z(1,1) = 0.996757;
tran_z(1,2) = 0.00324265;
tran_z(1,3) = 0;
tran_z(1,4) = 0;
tran_z(2,1) = 0.000385933;
tran_z(2,2) = 0.998441;
tran_z(2,3) = 0.00117336;
tran_z(2,4) = 0;
tran_z(3,1) = 0;
tran_z(3,2) = 0.00117336;
tran_z(3,3) = 0.998441;
tran_z(3,4) = 0.000385933;
tran_z(4,1) = 0;
tran_z(4,2) = 0;
tran_z(4,3) = 0.00324265;
tran_z(4,4) = 0.996757;
% Grid for capital k
kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)^(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)^(1/(alpha-1));
% [1 x 4800] grid of possible values of k
kgrid = linspace(kmin, kmax, nk);
% Compute initial wealth c0(k,z)
for iz=1:nz
c0(:,iz) = zgrid(iz)*kgrid.^alpha + (1-delta)*kgrid;
end
dif = 10000;
tol = 0.4;  % quick-test tolerance; overridden by the next line
tol = 1e-8;
cnt = 1;
t_acc=zeros([1,2]);
while dif>tol
%% orig-noparfor:
t=tic();
for ik=1:nk
for iz = 1:nz
tmpmax = -intmax;
for i = 1:nk
c1 = c0(ik,iz) - kgrid(i);
if (c1<0)
continue
end
c1 = c1^(1-eta)/(1-eta)+ev(i,iz);
if tmpmax<c1
tmpmax = c1;
end
end
v(ik,iz) = tmpmax;
end
end
t_acc(1) = t_acc(1) + toc(t);
%% better:
t=tic();
kgrid_ = reshape(kgrid,[1 1 numel(kgrid)]);
c1_ = c0 - kgrid_;
c1_x = c1_.^(1-eta)/(1-eta);
c2 = c1_x + reshape(ev', [1 nz nk]);
c2(c1_<0) = -Inf;
v_ = max(c2,[],3);
t_acc(2) = t_acc(2) + toc(t);
%% compare
assert(isequal(v_,v));
v=v_;
%% other
ev = beta*v*tran_z;
dif = max(max(abs(v-v0)));
v0 = v;
if mod(cnt,1)==0
fprintf('%1.5f : %1.5f \n', cnt, dif);
end
cnt = cnt+1;
end
disp(t_acc);
disp(toc(t_tot));
end
% toc result:
% tol = 0.4 -> 12 iterations :: t_acc = [ 17.7 9.8]
% tol = 1e-8 -> 1124 iterations :: t_acc = [1758.6 972.0]
%
% (all 1124 iterations) with commented-out orig :: t_tot = 931.7443
Now, it is strikingly evident that the most computationally intensive calculations inside the while loop (e.g. ^(1-eta)/(1-eta)) actually produce constants that could be pre-calculated. Once we fix that, the result is already a bit faster than the original parfor-based version (on my 2x E5-2630 v3):
function vectorized2()
t_tot = tic();
beta = 0.984;
eta = 2;
alpha = 0.35;
delta = 0.01;
rho = 0.95;
sigma = 0.005;
zmin=-0.0480384;
zmax=0.0480384;
nz = 4;
nk=4800;
v=zeros(nk,nz);
v0=zeros(nk,nz);
ev=zeros(nk,nz);
c0=zeros(nk,nz);
%Grid for productivity z
%[1 x 4] grid of values for z
zgrid = linspace(zmin,zmax,nz);
zgrid = exp(zgrid);
% [4 x 4] Markov transition matrix of z
tran_z = zeros([4,4]);
tran_z(1,1) = 0.996757;
tran_z(1,2) = 0.00324265;
tran_z(1,3) = 0;
tran_z(1,4) = 0;
tran_z(2,1) = 0.000385933;
tran_z(2,2) = 0.998441;
tran_z(2,3) = 0.00117336;
tran_z(2,4) = 0;
tran_z(3,1) = 0;
tran_z(3,2) = 0.00117336;
tran_z(3,3) = 0.998441;
tran_z(3,4) = 0.000385933;
tran_z(4,1) = 0;
tran_z(4,2) = 0;
tran_z(4,3) = 0.00324265;
tran_z(4,4) = 0.996757;
% Grid for capital k
kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)^(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)^(1/(alpha-1));
% [1 x 4800] grid of possible values of k
kgrid = linspace(kmin, kmax, nk);
% Compute initial wealth c0(k,z)
for iz=1:nz
c0(:,iz) = zgrid(iz)*kgrid.^alpha + (1-delta)*kgrid;
end
dif = 10000;
tol = 0.4;  % quick-test tolerance; overridden by the next line
tol = 1e-8;
cnt = 1;
t_acc=zeros([1,2]);
%% constants:
kgrid_ = reshape(kgrid,[1 1 numel(kgrid)]);
c1_ = c0 - kgrid_;
mask=zeros(size(c1_));
mask(c1_<0)=-Inf;
c1_x = c1_.^(1-eta)/(1-eta);
while dif>tol
%% orig:
t=tic();
parfor ik=1:nk
for iz = 1:nz
tmpmax = -intmax;
for i = 1:nk
c1 = c0(ik,iz) - kgrid(i);
if (c1<0)
continue
end
c1 = c1^(1-eta)/(1-eta)+ev(i,iz);
if tmpmax<c1
tmpmax = c1;
end
end
v(ik,iz) = tmpmax;
end
end
t_acc(1) = t_acc(1) + toc(t);
%% better:
t=tic();
c2 = c1_x + reshape(ev', [1 nz nk]);
c2 = c2 + mask;
v_ = max(c2,[],3);
t_acc(2) = t_acc(2) + toc(t);
%% compare
assert(isequal(v_,v));
v=v_;
%% other
ev = beta*v*tran_z;
dif = max(max(abs(v-v0)));
v0 = v;
if mod(cnt,1)==0
fprintf('%1.5f : %1.5f \n', cnt, dif);
end
cnt = cnt+1;
end
disp(t_acc);
disp(toc(t_tot));
end
% toc result:
% tol = 0.4 -> 12 iterations :: t_acc = [ 2.4 1.7]
% tol = 1e-8 -> 1124 iterations :: t_acc = [188.3 115.9]
%
% (all 1124 iterations) with commented-out orig :: t_tot = 117.6217
This vectorized code is still inefficient (e.g. the reshape(ev',...), which eats ~60% of the time, could easily be avoided by re-ordering dimensions; see the sketch after the timings below), but it is somewhat suitable for gpuArray():
function vectorized3g()
t0 = tic();
beta = 0.984;
eta = 2;
alpha = 0.35;
delta = 0.01;
rho = 0.95;
sigma = 0.005;
zmin=-0.0480384;
zmax=0.0480384;
nz = 4;
nk=4800;
v=zeros(nk,nz);
v0=zeros(nk,nz);
ev=gpuArray(zeros(nk,nz,'single'));
c0=zeros(nk,nz);
%Grid for productivity z
%[1 x 4] grid of values for z
zgrid = linspace(zmin,zmax,nz);
zgrid = exp(zgrid);
% [4 x 4] Markov transition matrix of z
tran_z = zeros([4,4]);
tran_z(1,1) = 0.996757;
tran_z(1,2) = 0.00324265;
tran_z(1,3) = 0;
tran_z(1,4) = 0;
tran_z(2,1) = 0.000385933;
tran_z(2,2) = 0.998441;
tran_z(2,3) = 0.00117336;
tran_z(2,4) = 0;
tran_z(3,1) = 0;
tran_z(3,2) = 0.00117336;
tran_z(3,3) = 0.998441;
tran_z(3,4) = 0.000385933;
tran_z(4,1) = 0;
tran_z(4,2) = 0;
tran_z(4,3) = 0.00324265;
tran_z(4,4) = 0.996757;
% Grid for capital k
kmin = 0.95*(1/(alpha*zgrid(1)))*((1/beta)-1+delta)^(1/(alpha-1));
kmax = 1.05*(1/(alpha*zgrid(nz)))*((1/beta)-1+delta)^(1/(alpha-1));
% [1 x 4800] grid of possible values of k
kgrid = linspace(kmin, kmax, nk);
% Compute initial wealth c0(k,z)
for iz=1:nz
c0(:,iz) = zgrid(iz)*kgrid.^alpha + (1-delta)*kgrid;
end
dif = 10000;
tol = 1e-8;
cnt = 1;
t_acc=zeros([1,2]);
%% constants:
kgrid_ = reshape(kgrid,[1 1 numel(kgrid)]);
c1_ = c0 - kgrid_;
mask=gpuArray(zeros(size(c1_),'single'));
mask(c1_<0)=-Inf;
c1_x = c1_.^(1-eta)/(1-eta);
c1_x = gpuArray(single(c1_x));
while dif>tol
%% orig:
% t=tic();
% parfor ik=1:nk
% for iz = 1:nz
% tmpmax = -intmax;
%
% for i = 1:nk
% c1 = c0(ik,iz) - kgrid(i);
% if (c1<0)
% continue
% end
% c1 = c1^(1-eta)/(1-eta)+ev(i,iz);
% if tmpmax<c1
% tmpmax = c1;
% end
% end
% v(ik,iz) = tmpmax;
% end
%
% end
% t_acc(1) = t_acc(1) + toc(t);
%% better:
t=tic();
c2 = c1_x + reshape(ev', [1 nz nk]);
c2 = c2 + mask;
v_ = max(c2,[],3);
t_acc(2) = t_acc(2) + toc(t);
%% compare
% assert(isequal(v_,v));
v = v_;
%% other
ev = beta*v*tran_z;
dif = max(max(abs(v-v0)));
v0 = v;
if mod(cnt,1)==0
fprintf('%1.5f : %1.5f \n', cnt, dif);
end
cnt = cnt+1;
end
disp(t_acc);
disp(toc(t0));
end
% (all 849 iterations) with commented-out orig :: t_tot = 14.9040
This ~15 sec result is ~7x worse than the ~2 sec we get from Matlab Coder, but this option requires fewer toolboxes. In practice, gpuArray is most handy when you start from writing "native Matlab code", including interactive use.
Finally, if you build this final vectorized version with Matlab Coder (you would have to do some trivial adjustments), it won't be faster than the first one. It would be 2x-3x slower.
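As noted above, the reshape(ev',...) transpose inside the loop can be avoided by re-ordering the dimensions of the precomputed constants once, outside the loop. A minimal sketch of the idea, reusing the variable names from vectorized2 (untimed here):
% Permute the constants so the i-dimension of ev lines up without a
% transpose: c1_x becomes nk x nk x nz, indexed (ik, i, iz).
c1_x = permute(c1_x, [1 3 2]);
mask = permute(mask, [1 3 2]);
while dif > tol
    c2 = c1_x + reshape(ev, [1 nk nz]) + mask; % ev(i,iz) broadcasts directly
    v_ = reshape(max(c2, [], 2), [nk, nz]);    % maximize over i (dim 2)
    % ... ev update and convergence check as before ...
end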
So, this bit is what is going to mess you up on this project. MATLAB stands for Matrix Laboratory. Vectors and matrices are kind of its thing. The number one way to optimize anything in MATLAB is to vectorize it. For this reason, when using performance-enhancing tools like CUDA, MATLAB assumes that you are going to vectorize your inputs if possible. Given the primacy of vectorized inputs in the MATLAB coding style, it is not a fair comparison to assess its performance using only loops. It would be like assessing the performance of C++ while refusing to use pointers. If you want to use CUDA with MATLAB, the main way to go about it is to vectorize your inputs and use gpuArray. Honestly, I haven't looked too hard at your code, but it kind of looks like your inputs are already mostly vectorized. You may be able to get away with something as simple as gpuArray(1:nk) or kgrid = gpuArray(linspace(...)).
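For reference, a minimal sketch of what that might look like, assuming the Parallel Computing Toolbox and the variable names from the code above:
% Move the large constant arrays to the GPU once; element-wise arithmetic
% involving them then runs on the GPU automatically.
kgrid = gpuArray(linspace(kmin, kmax, nk));
ev    = zeros(nk, nz, 'gpuArray');
% ... run the vectorized update loop unchanged ...
v = gather(v); % copy the result back to host memory at the end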

Finding the distribution of '1' in rows/columns from a parity-check matrix

My question this time concerns obtaining the degree distribution of an LDPC matrix through linear programming, under the problem statement given (as an image) in the original post.
My code is the following:
function [v] = LP_Irr_LDPC(k,Ebn0)
options = optimoptions('fmincon','Display','iter','Algorithm','interior-point','MaxIter', 4000, 'MaxFunEvals', 70000);
fun = @(v) -sum(v(1:k)./(1:k));
A = [];
b = [];
Aeq = [0, ones(1,k-1)];
beq = 1;
lb = zeros(1,k);
ub = [0, ones(1,k-1)];
nonlcon = @(v)DensEv_SP(v,Ebn0);
l0 = [0 rand(1,k-1)];
l0 = l0./sum(l0);
v = fmincon(fun,l0,A,b,Aeq,beq,lb,ub,nonlcon,options)
end
Definition of nonlinear constraints:
function [c, ceq] = DensEv_SP(v,Ebn0)
% This function also needs to be modified, as you cannot pass parameters into it from other functions.
h = [0 rand(1,19)];
h = h./sum(h); % This is where h comes from
syms x;
X = x.^(0:(length(h)-1));
R = h*transpose(X);
ebn0 = 10^(Ebn0/10);
Rm = 1;
LLR = (-50:50);
p03 = 0.3;
LLR03 = log((1-p03)/p03);
r03 = 1 - p03;
noise03 = (2*r03*Rm*ebn0)^-1;
pf03 = normpdf(LLR, LLR03, noise03);
sumpf03 = sum(pf03(1:length(pf03)/2));
divisions = 100;
Aj = zeros(1, divisions);
rho = zeros(1, divisions);
xj = zeros(1, divisions);
k = 10; % Length(v) -> Same value as in 'Complete.m'
for j=1:1:divisions
xj(j) = sumpf03*j/divisions;
rho(j) = subs(R,x,1-xj(j));
Aj(j) = 1 - rho(j);
end
c = zeros(1, length(xj));
lambda = zeros(1, length(Aj));
for j = 1:1:length(xj)
lambda(j) = sum(v(2:k).*(Aj(j).^(1:(k-1))));
c(j) = sumpf03*lambda(j) - xj(j);
end
save Almacen
ceq = [];
%ceq = sum(v)-1;
end
This question is linked to the one posted here. My problem is that I need each element of the vectors v and h resulting from this optimization problem to be a fraction of the form x/N and x/(N(1-r)) respectively.
How could I ensure that condition without losing convergence capability?
I came up with one possible solution, at least for vector h, within the function DensEv_SP:
function [c, ceq] = DensEv_SP(v,Ebn0)
% This function also needs to be modified, as you cannot pass parameters into it from other functions.
k = 10; % Same as in Complete.m, desired sum of h
M = 19; % Number of integers
% Stars-and-bars construction: M non-negative integers that sum to k
h = [0 diff([0,sort(randperm(k+M-1,M-1)),k+M])-ones(1,M)];
h = h./sum(h);
syms x;
X = x.^(0:(length(h)-1));
R = h*transpose(X);
ebn0 = 10^(Ebn0/10);
Rm = 1;
LLR = (-50:50);
p03 = 0.3;
LLR03 = log((1-p03)/p03);
r03 = 1 - p03;
noise03 = (2*r03*Rm*ebn0)^-1;
pf03 = normpdf(LLR, LLR03, noise03);
sumpf03 = sum(pf03(1:length(pf03)/2));
divisions = 100;
Aj = zeros(1, divisions);
rho = zeros(1, divisions);
xj = zeros(1, divisions);
N = 20; % Length(v) -> Same value as in 'Complete.m'
for j=1:1:divisions
xj(j) = sumpf03*j/divisions;
rho(j) = subs(R,x,1-xj(j));
Aj(j) = 1 - rho(j);
end
c = zeros(1, length(xj));
lambda = zeros(1, length(Aj));
for j = 1:1:length(xj)
lambda(j) = sum(v(2:k).*(Aj(j).^(1:(k-1))));
c(j) = sumpf03*lambda(j) - xj(j);
end
save Almacen
ceq = (N*v)-floor(N*v);
%ceq = sum(v)-1;
end
As stated above, there is no longer any problem with vector h; nevertheless, the way I defined the ceq value seems to be insufficient to make the optimization work out (the problems with v have not diminished at all). Does anybody know how to find the solution?
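One likely reason this constraint hampers convergence: ceq = (N*v) - floor(N*v) is discontinuous in v, while fmincon's interior-point algorithm assumes smooth constraints with usable gradients, so the solver gets no directional information from it. An integrality requirement like this is usually better handled outside the solver, for example by rounding N*v after convergence.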

Matlab: how to calculate the Pseudo Zernike moments?

The code below implements algorithm 1, which computes the pseudo-Zernike radial polynomials:
function R = pseudo_zernike_radial_polynomials(n,r)
if any( r>1 | r<0 | n<0)
error('zernike_radial_polynomials: r must be between 0 and 1, and n must not be less than 0.')
end
if n==0;
R =ones(n +1, length(r));
return;
end
R =ones(n +1, length(r));
rSQRT= sqrt(r);
r0 = ~logical(rSQRT.^(2*n+1)) ; % if any low r exist, and high n, then treat as 0
if any(r0)
m = n:-1:mod(n,2); ss=1:sum(r0);
R0(m +1, ss)=0;
R0(0 +1, ss)=1;
R(:,r0)=R0;
end
if any(~r0)
rSQRT= rSQRT(~r0);
R1 = zernike_radial_polynomials(2*n+1, rSQRT );
m = 2:2: 2*n+1 +1;
R1=R1(m,:);
for m=1:size(R1,1)
R1(m,:) = R1(m,:)./rSQRT';
end
R(:,~r0)=R1;
end
Then algorithm 2 (given as an image in the original post) calculates the moments, and I translated it into code as follows:
clear all
%input : 2D image f, Nmax = order.
f = rgb2gray(imread('Oval_45.png'));
prompt = ('Input PZM order Nmax:');
Nmax = input(prompt);
Pzm =0;
l = size(f,1);
for x = 1:l;
for y =x;
for n = 0:Nmax;
[X,Y] = meshgrid(x,y);
R = sqrt((2.*X-l-1).^2+(2.*Y-l-1).^2)/l;
theta = atan2((l-1-2.*Y+2),(2.*X-l+1-2));
R = (R<=1).*R;
rad = pseudo_zernike_radial_polynomials(n, R);
for m = 0:n;
%find psi
if mod(m,2)==0
%m is even
newd1 = f(x,y)+f(x,y);
newd2 = f(y,x)+f(y,x);
newd3 = f(x,y)+f(x,y);
newd4 = f(y,x)+f(y,x);
x1 = newd1;
y1 = (-1)^m/2*newd2;
x2 = newd3;
y2 = (-1)^m/2*newd4;
psi = cos(m*theta)*(x1+y1+x2+y2)-(1i)*sin(m*theta)*(x1+y1-x2-y2);
else
newd1 = f(x,y)-f(x,y);
newd2 = f(y,x)-f(y,x);
newd3 = f(x,y)-f(x,y);
newd4 = f(y,x)-f(y,x);
x1 = newd1;
y1 = (-1)^m/2*newd2;
x2 = newd3;
y2 = (-1)^m/2*newd4;
psi = cos(m*theta)*(x1+x2)+sin(m*theta)*(y1-y2)+(1i)*(cos(m*theta)*(y1+y2)-sin(m*theta)*(x1-x2));
end
Pzm = Pzm+rad*psi;
end
end
end
end
However, it gives me this error:
Error using *
Integers can only be combined with integers of the same class, or scalar doubles.
Error in main_pzm (line 44)
Pzm = Pzm+rad*psi;
The details of the calculation can be seen here.
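A likely cause of this error: rgb2gray(imread(...)) returns a uint8 image, and MATLAB only combines integer arrays with scalar doubles, not with double matrices such as rad. Casting the image to double when it is read should avoid the class mixing:
f = double(rgb2gray(imread('Oval_45.png'))); % cast before any arithmetic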

Can't find my code error in LU decomposition [duplicate]

I am trying to implement my own LU decomposition with partial pivoting. My code is below and apparently works fine, but for some matrices it gives results different from the built-in [L, U, P] = lu(A) function in MATLAB.
Can anyone spot where it is wrong?
function [L, U, P] = lu_decomposition_pivot(A)
n = size(A,1);
Ak = A;
L = zeros(n);
U = zeros(n);
P = eye(n);
for k = 1:n-1
for i = k+1:n
[~,r] = max(abs(Ak(:,k)));
Ak([k r],:) = Ak([r k],:);
P([k r],:) = P([r k],:);
L(i,k) = Ak(i,k) / Ak(k,k);
for j = k+1:n
U(k,j-1) = Ak(k,j-1);
Ak(i,j) = Ak(i,j) - L(i,k)*Ak(k,j);
end
end
end
L(1:n+1:end) = 1;
U(:,end) = Ak(:,end);
return
Here are the two matrices I've tested with. The first one comes out correct, whereas the second has some elements swapped.
A = [1 2 0; 2 4 8; 3 -1 2];
A = [0.8443 0.1707 0.3111;
0.1948 0.2277 0.9234;
0.2259 0.4357 0.4302];
UPDATE
I have checked my code and corrected some bugs, but something is still missing in the partial pivoting: in the first column, the last two rows are always swapped (compared with the result of lu() in MATLAB).
function [L, U, P] = lu_decomposition_pivot(A)
n = size(A,1);
Ak = A;
L = eye(n);
U = zeros(n);
P = eye(n);
for k = 1:n-1
[~,r] = max(abs(Ak(k:end,k)));
r = n-(n-k+1)+r;
Ak([k r],:) = Ak([r k],:);
P([k r],:) = P([r k],:);
for i = k+1:n
L(i,k) = Ak(i,k) / Ak(k,k);
for j = 1:n
U(k,j) = Ak(k,j);
Ak(i,j) = Ak(i,j) - L(i,k)*Ak(k,j);
end
end
end
U(:,end) = Ak(:,end);
return
I forgot that if there is a swap in matrix P, I also have to apply the same swap to matrix L. So just add the next line after swapping P, and everything will work:
L([k r],:) = L([r k],:);
Neither function is correct.
Here is a correct one.
function [L, U, P] = LU_pivot(A)
[m, n] = size(A); L=eye(n); P=eye(n); U=A;
for k=1:m-1
pivot=max(abs(U(k:m,k)))
for j=k:m
if(abs(U(j,k))==pivot)
ind=j
break;
end
end
U([k,ind],k:m)=U([ind,k],k:m)
L([k,ind],1:k-1)=L([ind,k],1:k-1)
P([k,ind],:)=P([ind,k],:)
for j=k+1:m
L(j,k)=U(j,k)/U(k,k)
U(j,k:m)=U(j,k:m)-L(j,k)*U(k,k:m)
end
pause;
end
end
My answer is here:
function [L, U, P] = LU_pivot(A)
[n, n1] = size(A); L=eye(n); P=eye(n); U=A;
for j = 1:n
[pivot m] = max(abs(U(j:n, j)));
m = m+j-1;
if m ~= j
U([m,j],:) = U([j,m], :); % interchange rows m and j in U
P([m,j],:) = P([j,m], :); % interchange rows m and j in P
if j >= 2; % very_important_point
L([m,j],1:j-1) = L([j,m], 1:j-1); % interchange rows m and j in columns 1:j-1 of L
end;
end
for i = j+1:n
L(i, j) = U(i, j) / U(j, j);
U(i, :) = U(i, :) - L(i, j)*U(j, :);
end
end
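A quick sanity check for any of these implementations is to verify the defining identity P*A = L*U and compare the pivoting against the built-in lu (a sketch; substitute whichever function you are testing):
A = [0.8443 0.1707 0.3111;
     0.1948 0.2277 0.9234;
     0.2259 0.4357 0.4302];
[L, U, P] = LU_pivot(A);
norm(P*A - L*U)       % should be on the order of machine epsilon
[Lb, Ub, Pb] = lu(A); % built-in reference factorization
isequal(P, Pb)        % partial pivoting normally makes the same row choices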

Recomposing vector input to algorithm from matrix output

I've written some code to implement an algorithm that takes as input a vector q of real numbers, and returns as an output a complex matrix R. The Matlab code below produces a plot showing the input vector q and the output matrix R.
Given only the complex matrix output R, I would like to obtain the input vector q. Can I do this using least-squares optimization? Since there is a recursive running sum in the code (rs_r and rs_i), the calculation for a column of the output matrix is dependent on the calculation of the previous column.
Perhaps a non-linear optimization can be set up to recompose the input vector q from the output matrix R?
Looking at this in another way, I've used an algorithm to compute a matrix R. I want to run the algorithm "in reverse," to get the input vector q from the output matrix R.
If there is no way to recompose the starting values from the output, thereby treating the problem as a "black box," then perhaps the mathematics of the model itself can be used in the optimization? The program evaluates the following equation (given as an image in the original post):
The Utilde(tau, omega) is the output matrix R. The tau (time) variable comprises the columns of the response matrix R, whereas the omega (frequency) variable comprises the rows of the response matrix R. The integration is performed as a recursive running sum from tau = 0 up to the current tau timestep.
Here are the plots created by the program posted below (shown as images in the original post):
Here is the full program code:
N = 1001;
q = zeros(N, 1); % here is the input
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
R = get_response(N, q, dt, wSize, Glim, ginv); % R is output matrix
rows = wSize;
cols = N;
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure; imagesc(abs(R)); title('Matrix output of algorithm')
colorbar
Here is the function that performs the calculation:
function response = get_response(N, Q, dt, wSize, Glim, ginv)
fs = 1 / dt;
Npad = wSize - 1;
N1 = wSize + Npad;
N2 = floor(N1 / 2 + 1);
f = (fs/2)*linspace(0,1,N2);
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
sigma2 = exp(-(0.23*Glim + 1.63));
sign = 1;
if(ginv == 1)
sign = -1;
end
ratio = omega ./ omegah;
rs_r = zeros(N2, 1);
rs_i = zeros(N2, 1);
termr = zeros(N2, 1);
termi = zeros(N2, 1);
termr_sub1 = zeros(N2, 1);
termi_sub1 = zeros(N2, 1);
response = zeros(N2, N);
% cycle over cols of matrix
for ti = 1:N
term0 = omega ./ (2 .* Q(ti));
gamma = 1 / (pi * Q(ti));
% calculate for the real part
if(ti == 1)
Lambda = ones(N2, 1);
termr_sub1(1) = 0;
termr_sub1(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
else
termr(1) = 0;
termr(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
rs_r = rs_r - dt.*(termr + termr_sub1);
termr_sub1 = termr;
Beta = exp( -1 .* -0.5 .* rs_r );
Lambda = (Beta + sigma2) ./ (Beta.^2 + sigma2); % vector
end
% calculate for the complex part
if(ginv == 1)
termi(1) = 0;
termi(2:end) = (ratio(2:end).^(sign .* gamma) - 1) .* omega(2:end);
else
termi = (ratio.^(sign .* gamma) - 1) .* omega;
end
rs_i = rs_i - dt.*(termi + termi_sub1);
termi_sub1 = termi;
integrand = exp( 1i .* -0.5 .* rs_i );
if(ginv == 1)
response(:,ti) = Lambda .* integrand;
else
response(:,ti) = (1 ./ Lambda) .* integrand;
end
end % ti loop
No, you cannot do so unless you know the "model" itself for this process. If you intend to treat the process as a complete black box, then it is impossible in general, although in any specific instance, anything can happen.
Even if you DO know the underlying process, it may still not work, as any least-squares estimator depends on the starting values, so if you do not have a good guess there, it may converge to the wrong set of parameters.
It turns out that by using the mathematics of the model, the input can be estimated. This is not true in general, but for my problem it seems to work. The cumulative integral is eliminated by a partial derivative.
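In more detail (a sketch of the reasoning, using the variable names from the code): each output column satisfies imag(log(R(:,ti))) is approximately the integral of termi over tau, accumulated by the trapezoidal running sum rs_i. Differentiating imag(log(R)) along the tau (column) direction therefore recovers termi(tau,omega) = ((omega/omegah).^gamma - 1).*omega, which depends on q(tau) only through gamma = 1/(pi*q(tau)). Each column of that derivative can then be fitted independently for a single scalar q, which is what the fminbnd call below does; the cut_val threshold suppresses the spikes caused by 2*pi phase wraps in the complex log.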
N = 1001;
q = zeros(N, 1);
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
R = get_response(N, q, dt, wSize, Glim, ginv);
rows = wSize;
cols = N;
cut_val = 200;
imagLogR = imag(log(R));
Mderiv = zeros(rows, cols-2);
for k = 1:rows
val = deriv_3pt(imagLogR(k,:), dt);
val(val > cut_val) = 0;
Mderiv(k,:) = val(1:end-1);
end
disp('Running iteration');
q0 = 10;
q1 = 500;
NN = cols - 2;
qout = zeros(NN, 1);
for k = 1:NN
data = Mderiv(:,k);
qout(k) = fminbnd(@(q) curve_fit_to_get_q(q, dt, rows, data),q0,q1);
end
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure;
plot(qout); title('Reconstructed q')
ylim([0 200]); xlim([0 1001])
Here are the supporting functions:
function output = deriv_3pt(x, dt)
% Function to compute dx/dt using the 3pt symmetrical rule
% dt is the timestep
N = length(x);
N0 = N - 1;
output = zeros(N0, 1);
denom = 2 * dt;
for k = 2:N0
output(k - 1) = (x(k+1) - x(k-1)) / denom;
end
function sse = curve_fit_to_get_q(q, dt, rows, data)
fs = 1 / dt;
N2 = rows;
f = (fs/2)*linspace(0,1,N2); % vector for frequency along cols
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
ratio = omega ./ omegah;
gamma = 1 / (pi * q);
termi = ((ratio.^(gamma)) - 1) .* omega;
Error_Vector = termi - data;
sse = sum(Error_Vector.^2);