MATLAB to Scilab conversion: mfile2sci error "File contains no instruction"

I am very new to Scilab, but so far have not been able to find an answer (either here or via Google) to my question. I'm sure there's a simple solution, but I'm at a loss. I have a lot of MATLAB scripts I wrote in grad school, but now that I'm out of school, I no longer have access to MATLAB (and can't justify the cost). Scilab looked like the best open alternative. I'm trying to convert my .m files to Scilab-compatible versions using mfile2sci, but when running the mfile2sci GUI, I get the error/message shown below. Attached is the original code from the M-file, in case it's relevant.
I searched Stack Overflow and companion sites, Google, and the Scilab documentation.
The M-file code follows (it's a super basic MATLAB script as part of an old homework question -- I chose it as it's the shortest, most straightforward M-file I had):
Mmax = 15;
N = 20;
T = 2000;
%define upper limit for sparsity of signal
smax = 15;
mNE = zeros(smax,Mmax);
mESR= zeros(smax,Mmax);
for M = 1:Mmax
aNormErr = zeros(smax,1);
aSz = zeros(smax,1);
ESR = zeros(smax,1);
for s=1:smax % for-loop to loop script smax times
normErr = zeros(1,T);
vESR = zeros(1,T);
sz = zeros(1,T);
for t=1:T %for-loop to carry out 2000 trials per s-value
esr = 0;
A = randn(M,N); % generate random MxN matrix
[M,N] = size(A);
An = zeros(M,N); % initialize normalized matrix
for h = 1:size(A,2) % normalize columns of matrix A
V = A(:,h)/norm(A(:,h));
An(:,h) = V;
end
A = An; % replace A with its column-normalized counterpart
c = randperm(N,s); % create random support vector with s entries
x = zeros(N,1); % initialize vector x
for i = 1:size(c,2)
val = (10-1)*rand + 1;% generate interval [1,10]
neg = mod(randi(10),2); % include [-10,-1]
if neg~=0
val = -1*val;
end
x(c(i)) = val; %replace c(i)th value of x with the nonzero value
end
y = A*x; % generate measurement vector (y)
R = y;
S = []; % initialize array to store selected columns of A
indx = []; % vector to store indices of selected columns
coeff = zeros(1,s); % vector to store coefficients of approx.
stop = 10; % init. stop condition
in = 0; % index variable
esr = 0;
xhat = zeros(N,1); % initialize estimated x signal
while (stop>0.5 && size(S,2)<smax)
%MAX = abs(A(:,1)'*R);
maxV = zeros(1,N);
for i = 1:size(A,2)
maxV(i) = abs(A(:,i)'*R);
end
in = find(maxV == max(maxV));
indx = [indx in];
S = [S A(:,in)];
coeff = [coeff R'*S(:,size(S,2))]; % update coefficient vector
for w=1:size(S,2)
r = y - ((R'*S(:,w))*S(:,w)); % update residuals
if norm(r)<norm(R)
index = w;
end
R = r;
stop = norm(R); % update stop condition
end
for j=1:size(S,2) % place coefficients into xhat at correct indices
xhat(indx(j))=coeff(j);
end
nE = norm(x-xhat)/norm(x); % calculate normalized error for this estimate
%esr = 0;
indx = sort(indx);
c = sort(c);
if isequal(indx,c)
esr = esr+1;
end
end
vESR(t) = esr;
sz(t) = size(S,2);
normErr(t) = nE;
end
%avsz = sum(sz)/T;
aSz(s) = sum(sz)/T;
%aESR = sum(vESR)/T;
ESR(s) = sum(vESR)/T;
%avnormErr = sum(normErr)/T; % produce average normalized error for these run
aNormErr(s) = sum(normErr)/T; % add new avnormErr to vector of all av norm errors
end
% just put this here to view the vector
mNE(:,M) = aNormErr;
mESR(:,M) = ESR;
end % closes the 'for M' loop (required; without it the M-file will not parse)
mNE  %reshape(mNE,[],Mmax)
mESR %reshape(mESR,[],Mmax)
figure
dimx = [1 Mmax];
dimy = [1 smax];
imagesc(dimx,dimy,mESR)
colormap gray
strESR = sprintf('Average ESR, N=%d',N);
title(strESR);
xlabel('M');
ylabel('s');
strNE = sprintf('Average Normed Error, N=%d',N);
figure
imagesc(dimx,dimy,mNE)
colormap gray
title(strNE)
xlabel('M');
ylabel('s');
The command used (and results) follow:
--> mfile2sci
ans =
[]
****** Beginning of mfile2sci() session ******
File to convert: C:/Users/User/Downloads/WTF_new.m
Result file path: C:/Users/User/DOWNLO~1/
Recursive mode: OFF
Only double values used in M-file: NO
Verbose mode: 3
Generate formatted code: NO
M-file reading...
M-file reading: Done
Syntax modification...
Syntax modification: Done
File contains no instruction, no translation made...
****** End of mfile2sci() session ******

To convert the foo.m file one has to enter
mfile2sci <path>/foo.m
where <path> stands for the path of the directory where foo.m is. The result is written in <path>/foo.sci
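With the file from the question, the call can also be made by passing the paths as arguments, e.g. (a hedged example; the remaining options keep the defaults shown in the session log):
--> mfile2sci('C:/Users/User/Downloads/WTF_new.m', 'C:/Users/User/Downloads/')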

Remove the ```` at the beginning of each line and the conversion will proceed normally. However, don't expect to obtain a working .sci file, as the m2sci converter is (to me) still an experimental tool!
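If there are many files to clean up, a minimal sketch along these lines (MATLAB/Octave syntax; it assumes the stray characters are leading backticks, and the filenames are placeholders) strips them in one pass:
raw = fileread('WTF_new.m');                       % read the whole file as text
clean = regexprep(raw, '^`+', '', 'lineanchors');  % drop leading backticks on every line
fid = fopen('WTF_clean.m', 'w');                   % write the cleaned copy
fwrite(fid, clean);
fclose(fid);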

Related

cell2mat not supported for C code generation in MATLAB Coder

I am trying to convert a neural network function written in MATLAB to a C function using the MATLAB Coder app. But when I try to convert, I get errors such as:
cell2mat is not supported for code generation
arrayfun is not supported for code generation
How can I replace these functions?
My code is as given below. Input X is a 1x2500 matrix and the output is a 1x6 matrix.
The full code is available in this link: boneGrwNN
function Y = boneGrwNN(X)
x1_step1.ymin = -1;
% ===== SIMULATION ========
% Format Input Arguments
isCellX = iscell(X);
if ~isCellX, X = {X}; end;
% Dimensions
TS = size(X,2); % timesteps
if ~isempty(X)
Q = size(X{1},1); % samples/series
else
Q = 0;
end
% Allocate Outputs
Y = cell(1,TS);
% Time loop
for ts=1:TS
% Input 1
X{1,ts} = X{1,ts}';
Xp1 = mapminmax_apply(X{1,ts},x1_step1);
% Layer 1
a1 = tansig_apply(repmat(b1,1,Q) + IW1_1*Xp1);
% Layer 2
a2 = softmax_apply(repmat(b2,1,Q) + LW2_1*a1);
% Output 1
Y{1,ts} = a2;
Y{1,ts} = Y{1,ts}';
end
% Format Output Arguments
if ~isCellX, Y = cell2mat(Y); end
end
% ===== MODULE FUNCTIONS ========
% Map Minimum and Maximum Input Processing Function
function y = mapminmax_apply(x,settings)
y = bsxfun(@minus,x,settings.xoffset);
y = bsxfun(@times,y,settings.gain);
y = bsxfun(@plus,y,settings.ymin);
end
% Competitive Soft Transfer Function
function a = softmax_apply(n,~)
if isa(n,'gpuArray')
a = iSoftmaxApplyGPU(n);
else
a = iSoftmaxApplyCPU(n);
end
end
function a = iSoftmaxApplyCPU(n)
nmax = max(n,[],1);
n = bsxfun(@minus,n,nmax);
numerator = exp(n);
denominator = sum(numerator,1);
denominator(denominator == 0) = 1;
a = bsxfun(@rdivide,numerator,denominator);
end
function a = iSoftmaxApplyGPU(n)
nmax = max(n,[],1);
numerator = arrayfun(@iSoftmaxApplyGPUHelper1,n,nmax);
denominator = sum(numerator,1);
a = arrayfun(@iSoftmaxApplyGPUHelper2,numerator,denominator);
end
function numerator = iSoftmaxApplyGPUHelper1(n,nmax)
numerator = exp(n - nmax);
end
function a = iSoftmaxApplyGPUHelper2(numerator,denominator)
if (denominator == 0)
a = numerator;
else
a = numerator ./ denominator;
end
end
% Sigmoid Symmetric Transfer Function
function a = tansig_apply(n,~)
a = 2 ./ (1 + exp(-2*n)) - 1;
end
How can I generate C code for this function?
You can generate standalone C code only from functions supported by code generation. If you really need C code because your environment doesn't support Matlab, then you have to manually convert unsupported functions or use coder.ceval in order to use external C code that implements the same functions.
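As a rough illustration of the coder.ceval route, a sketch like the following could hand the work to your own C code; the routine name my_cell2mat_c and its argument list are hypothetical placeholders for whatever C function you write:
function y = callMyC(x) %#codegen
% Hedged sketch: 'my_cell2mat_c' is a hypothetical external C routine;
% preallocate y so the code generator knows its size and class.
y = zeros(1, 6);
coder.ceval('my_cell2mat_c', coder.wref(y), coder.rref(x), int32(numel(x)));
end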
In your example, you could replace the arrayfun calls with traditional for-loops. In order to implement your own cell2mat code, just type open cell2mat to see the source code of the function and try to replicate its logic within your code.
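For the arrayfun calls in iSoftmaxApplyGPU, a codegen-friendly rewrite with explicit loops could look roughly like this (a sketch, not generated code; it reproduces the element-wise helpers shown above):
function a = iSoftmaxApplyLoops(n)
% Loop-based softmax equivalent to iSoftmaxApplyGPU for code generation:
% subtract the column max, exponentiate, then normalize by the column sum.
a = zeros(size(n));
nmax = max(n,[],1);
for c = 1:size(n,2)
    col = exp(n(:,c) - nmax(c));  % iSoftmaxApplyGPUHelper1, element-wise
    d = sum(col);
    if d == 0                     % iSoftmaxApplyGPUHelper2 zero guard
        a(:,c) = col;
    else
        a(:,c) = col ./ d;
    end
end
end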

Speed up calculation of maximum of normxcorr2

I need to calculate the maximum of the normalized cross-correlation of millions of particles. The size of the two arguments of normxcorr2 is 56*56. I can't parallelize the calculations. Is there any suggestion to speed up the code, especially since I don't need all the results but only the maximum value of each cross-correlation (to know the displacement)?
Example of the algorithm
%The choice of 170 particles is because in each time
%the code detects 170 particles, so over 10000 images it's 1 700 000 particles
particle_1=rand(54,54,170);
particle_2=rand(56,56,170);
for i=1:170
C=normxcorr2(particle_1(:,:,i),particle_2(:,:,i));
L(i)=max(C(:));
end
I don't have MATLAB so I ran the following code on this site: https://www.tutorialspoint.com/execute_matlab_online.php which actually runs Octave. So I implemented a "naive" normalized cross-correlation, and indeed for these small image sizes the naive version performs better:
Elapsed time is 2.62645 seconds - for normxcorr2
Elapsed time is 0.199034 seconds - for my naive_normxcorr2
The code is based on the article http://scribblethink.org/Work/nvisionInterface/nip.pdf which describes how to calculate the standard deviation needed for the normalization in an efficient way using an integral image; this is the box_corr2 function.
Also, MATLAB's normxcorr2 returns a padded image so I took the max on the unpadded part.
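For reference, the integral-image identity the box filter relies on: with I the (zero-padded) cumulative sum of img, the sum of img over rows a+1..b and columns c+1..d is I(b,d) - I(b,c) - I(a,d) + I(a,c), which is exactly the four-term expression box_corr2 accumulates for each box below.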
pkg load image
function [N] = naive_corr(pat,img)
[n,m] = size(img);
[np,mp] = size(pat);
N = zeros(n-np+1,m-mp+1);
for i = 1:n-np+1
for j = 1:m-mp+1
N(i,j) = sum(dot(pat,img(i:i+np-1,j:j+mp-1)));
end
end
end
%w_arr the array of coefficients for the boxes
%box_arr of size [k,4] where k is the number of boxes, each box represented by
%4 boundary coordinates [row_start, row_end, col_start, col_end] (see arr_a..arr_d below)
function [C] = box_corr2(img,box_arr,w_arr,n_p,m_p)
% construct integral image + zeros pad (for boundary problems)
I = cumsum(cumsum(img,2),1);
I = [zeros(1,size(I,2)+2); [zeros(size(I,1),1) I zeros(size(I,1),1)]; zeros(1,size(I,2)+2)];
% initialize result matrix
[n,m] = size(img);
C = zeros(n-n_p+1,m-m_p+1);
%C = zeros(n,m);
jump_x = 1;
jump_y = 1;
x_start = ceil(n_p/2);
x_end = n-x_start+mod(n_p,2);
x_span = x_start:jump_x:x_end;
y_start = ceil(m_p/2);
y_end = m-y_start+mod(m_p,2);
y_span = y_start:jump_y:y_end;
arr_a = box_arr(:,1) - x_start;
arr_b = box_arr(:,2) - x_start+1;
arr_c = box_arr(:,3) - y_start;
arr_d = box_arr(:,4) - y_start+1;
% cumulate box responses
k = size(box_arr,1); % == numel(w_arr)
for i = 1:k
a = arr_a(i);
b = arr_b(i);
c = arr_c(i);
d = arr_d(i);
C = C ...
+ w_arr(i) * ( I(x_span+b,y_span+d) ...
- I(x_span+b,y_span+c) ...
- I(x_span+a,y_span+d) ...
+ I(x_span+a,y_span+c) );
end
end
function [NCC] = naive_normxcorr2(temp,img)
[n_p,m_p]=size(temp);
M = n_p*m_p;
% compute template mean & std
temp_mean = mean(temp(:));
temp = temp - temp_mean;
temp_std = sqrt(sum(temp(:).^2)/M);
% compute windows' mean & std
wins_mean = box_corr2(img,[1,n_p,1,m_p],1/M, n_p,m_p);
wins_mean2 = box_corr2(img.^2,[1,n_p,1,m_p],1/M,n_p,m_p);
wins_std = real(sqrt(wins_mean2 - wins_mean.^2));
NCC_naive = naive_corr(temp,img);
NCC = NCC_naive ./ (M .* temp_std .* wins_std);
end
n = 170;
particle_1=rand(54,54,n);
particle_2=rand(56,56,n);
[n_p1,m_p1,c_p1]=size(particle_1);
[n_p2,m_p2,c_p2]=size(particle_2);
L1 = zeros(n,1);
L2 = zeros (n,1);
tic
for i=1:n
C1=normxcorr2(particle_1(:,:,i),particle_2(:,:,i));
C1_unpadded = C1(n_p1:n_p2 , m_p1:m_p2);
L1(i)=max(C1_unpadded(:));
end
toc
tic
for i=1:n
C2=naive_normxcorr2(particle_1(:,:,i),particle_2(:,:,i));
L2(i)=max(C2(:));
end
toc

parfor doesn't consider information about vectors which are used in it

This is a part of my code in MATLAB. I tried to make it parallel but there is an error:
The variable gax in a parfor cannot be classified.
I know why the error occurs: I should tell MATLAB that v is an increasing vector which doesn't contain repeated elements. Could anyone help me use this information to parallelize the code?
v=[1,3,6,8];
ggx=5.*ones(15,14);
gax=ones(15,14);
for m=v
if m > 1
parfor j=1:m-1
gax(j,m-1) = ggx(j,m-1);
end
end
if m<nn
parfor jo=m+1:15
gax(jo,m) = ggx(jo,m);
end
end
end
Optimizing code should be closely related to its purpose, especially when you use parfor. The code you wrote in the question can be written in a much more efficient way, and definitely does not need to be parallelized.
However, I understand that you tried to simplify the problem, just to get the idea of how to slice your variables, so here is a fixed version that can run with parfor. But this is surely not the way to write this code:
v = [1,3,6,8];
ggx = 5.*ones(15,14);
gax = ones(15,14);
nn = 5;
for m = v
if m > 1
temp_end = m-1;
temp = ggx(:,temp_end);
parfor ja = 1:temp_end
gax(ja,temp_end) = temp(ja);
end
end
if m < nn
temp = ggx(:,m);
parfor jo = m+1:15
gax(jo,m) = temp(jo);
end
end
end
A vectorized implementation will look like this:
v = [1,3,6,8];
ggx = 5.*ones(15,14);
gax = ones(15,14);
nn = 5;
m1 = v>1; % first condition with logical indexing
temp = v(m1)-1; % get the values from v
r = ones(1,sum(temp)); % generate a vector of indices
r(cumsum(temp)) = -temp+1; % place the resetting locations
r = cumsum(r); % calculate the indices
r(cumsum(temp)) = temp; % place the ending points
c = repelem(temp,temp); % create an indices vector for the columns
inds1 = sub2ind(size(gax),r,c); % convert the indices to linear
mnn = v<nn; % second condition with logical indexing
temp = v(mnn)+1; % get the values from v
r_max = size(gax,1); % get the height of gax
r_count = r_max-temp+1; % calculate no. of rows per value in v
r = ones(1,sum(r_count)); % generate a vector of indices
r([1 r_count(1:end-1)+1]) = temp; % set the starting indices
r(cumsum(r_count)+1) = -(r_count-temp)+1; % place the resetting locations
r = cumsum(r(1:end-1)); % calculate the indices
c = repelem(temp-1,r_count); % create an indices vector for the columns
inds2 = sub2ind(size(gax),r,c); % convert the indices to linear
gax([inds1 inds2]) = ggx([inds1 inds2]); % assign the relevant values
This is indeed quite complicated, and not always necessary. A good thing to remember, though, is that nested for loops are much slower than a single loop, so in some cases (depending on the size of the output), this may be the fastest solution:
for m = v
if m > 1
gax(1:m-1,m-1) = ggx(1:m-1,m-1);
end
if m<nn
gax(m+1:15,m) = ggx(m+1:15,m);
end
end

How to find the exact elemental value in the loop?

I am writing code to implement a basic optimization method to maintain minimum error between an observed and a simulated profile. I am doing this with a simple root mean square error. The whole code is given here for convenience.
%% Vegetation profile along a hillslope
% Parameters
Tv = 1; % values are {1,2,3,5,10}
%Kvg = 0.3; % according to the Tv values Kvg=1/Tv
Kvg = 1/Tv;
Kvd = 0.5; % values are {0.1,0.5,1,5,10}
Tcv = 10; % values are {20,50,80,130,180}
q = (1/3);
Nv = (1:1:10); %(Kvd*Tcv)/Kvg; range is given for the parameter
m = 2/3;
n = 2/3;
%% Horizontal coordinate and length scale
x = 0:1:100;
L = 100;
x_non = x/L;
Ne =10; %L/Tcv;
%% Vegetation profile on the slope
v = zeros(length(x_non),length(Nv));
for i = 1:length(x_non)
for j = 1:length(Nv)
v(i,j) = (1/(1+Nv(j)*(Ne^q)*(x_non(i)^q)));
end
end
%% importing the excel file containg real dataset
obs_data = xlsread('model_data.xlsx'); % 1-distance, 2-elevation, 3- vegetation, 4-slope
%% root mean square error between real and synthetic dataset of vegetation
rmse_v = zeros(length(x_non),length(Nv));
err_v = zeros(1,length(Nv));
for j=1:length(Nv)
rmse_v(:,j)=(obs_data(:,3)-v(:,j)).^2;
err_v(1,j)= sqrt(mean(rmse_v(:,j)));
end
Now I have to extract the value of Nv for which the error (err_v) is minimum. That Nv value has to be used in the next section.
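A minimal sketch of that extraction (assuming, as in the code above, that err_v holds one average RMSE per candidate value in Nv):
[~, j_min] = min(err_v);   % column index of the smallest average RMSE
Nv_best = Nv(j_min);       % the Nv value that minimizes the error
In the slope loop below, the scalar Nv_best would then stand in for Nv; note that as written the expression 1+Nv*(Ne^q)*(x_non(i)^q) uses the whole vector Nv, so the assignment to the scalar s(i,j) will error or not produce the intended value until that substitution is made.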
Nt = (1:1:100);
% Slope profile on the x direction
s = zeros(length(x_non),length(Nt));
for i = 1:length(x_non)
for j = 1:length(Nt)
s(i,j)=((Ne^q)/Nt(j))*(x_non(i)^(q-m))+(1/(Nt(j)*(x_non(i)^m)*(1+Nv*(Ne^q)...
*(x_non(i)^q))))^(1/n);
end
end
I would be grateful for any guidance on how to write this.
Thank you.
Regards

Optimizing repetitive estimation (currently a loop) in MATLAB

I've found myself needing to do a least-squares (or similar matrix-based operation) for every pixel in an image. Every pixel has a set of numbers associated with it, and so it can be arranged as a 3D matrix.
(This next bit can be skipped)
Quick explanation of what I mean by least-squares estimation :
Let's say we have some quadratic system that is modeled by Y = Ax^2 + Bx + C and we're looking for those A,B,C coefficients. With a few samples (at least 3) of X and the corresponding Y, we can estimate them by:
Arrange the (let's say 10) X samples into a matrix like X = [x(:).^2 x(:) ones(10,1)];
Arrange the Y samples into a similar matrix: Y = y(:);
Estimate the coefficients A,B,C by solving: coeffs = (X'*X)^(-1)*X'*Y;
Try this on your own if you want:
A = 5; B = 2; C = 1;
x = 1:10;
y = A*x(:).^2 + B*x(:) + C + .25*randn(10,1); % added some noise here
X = [x(:).^2 x(:) ones(10,1)];
Y = y(:);
coeffs = (X'*X)^-1*X'*Y
coeffs =
5.0040
1.9818
0.9241
START PAYING ATTENTION AGAIN IF I LOST YOU THERE
*MAJOR REWRITE*: I've modified it to bring it as close to the real problem that I have as possible while still keeping it a minimum working example.
Problem Setup
%// Setup
xdim = 500;
ydim = 500;
ncoils = 8;
nshots = 4;
%// matrix size for each pixel is ncoils x nshots (an overdetermined system)
%// each pixel has a matrix stored in the 3rd and 4th dimensions
regressor = randn(xdim,ydim, ncoils,nshots);
regressand = randn(xdim, ydim,ncoils);
So my problem is that I have to do a (X'*X)^-1*X'*Y (least-squares or similar) operation for every pixel in an image. While that itself is vectorized/matrixized the only way that I have to do it for every pixel is in a for loop, like:
Original code style
%// Actual work
tic
estimate = zeros(xdim,ydim);
for col=1:size(regressor,2)
for row=1:size(regressor,1)
X = squeeze(regressor(row,col,:,:));
Y = squeeze(regressand(row,col,:));
B = X\Y;
% B = (X'*X)^(-1)*X'*Y; %// equivalently
estimate(row,col) = B(1);
end
end
toc
Elapsed time = 27.6 seconds
EDITS in response to comments and other ideas
I tried some things:
1. Reshaped into a long vector and removed the double for loop. This saved some time.
2. Removed the squeeze (and in-line transposing) by permute-ing the picture beforehand: this saved a lot more time.
Current example:
%// Actual work
tic
estimate2 = zeros(xdim*ydim,1);
regressor_mod = permute(regressor,[3 4 1 2]);
regressor_mod = reshape(regressor_mod,[ncoils,nshots,xdim*ydim]);
regressand_mod = permute(regressand,[3 1 2]);
regressand_mod = reshape(regressand_mod,[ncoils,xdim*ydim]);
for ind=1:size(regressor_mod,3) % for every pixel
X = regressor_mod(:,:,ind);
Y = regressand_mod(:,ind);
B = X\Y;
estimate2(ind) = B(1);
end
estimate2 = reshape(estimate2,[xdim,ydim]);
toc
Elapsed time = 2.30 seconds (avg of 10)
isequal(estimate2,estimate) == 1;
Rody Oldenhuis's way
N = xdim*ydim*ncoils; %// number of columns
M = xdim*ydim*nshots; %// number of rows
ii = repmat(reshape(1:N,[ncoils,xdim*ydim]),[nshots 1]); %//column indices
jj = repmat(1:M,[ncoils 1]); %//row indices
X = sparse(ii(:),jj(:),regressor_mod(:));
Y = regressand_mod(:);
B = X\Y;
B = reshape(B(1:nshots:end),[xdim ydim]);
Elapsed time = 2.26 seconds (avg of 10)
or 2.18 seconds (if you don't include the definition of N,M,ii,jj)
SO THE QUESTION IS:
Is there an (even) faster way?
(I don't think so.)
You can achieve a ~factor of 2 speed up by precomputing the transposition of X. i.e.
for x=1:size(picture,2) % second dimension b/c already transposed
X = picture(:,x);
XX = X';
Y = randn(n_timepoints,1);
%B = (X'*X)^-1*X'*Y; ;
B = (XX*X)^-1*XX*Y;
est(x) = B(1);
end
Before: Elapsed time is 2.520944 seconds.
After: Elapsed time is 1.134081 seconds.
EDIT:
Your code, as it stands in your latest edit, can be replaced by the following
tic
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
% Actual work
picture = randn(xdim,ydim,n_timepoints);
picture = reshape(picture, [xdim*ydim,n_timepoints])'; % note transpose
YR = randn(n_timepoints,size(picture,2));
% (XX*X).^-1 = sum(picture.*picture).^-1;
% XX*Y = sum(picture.*YR);
est = sum(picture.*picture).^-1 .* sum(picture.*YR);
est = reshape(est,[xdim,ydim]);
toc
Elapsed time is 0.127014 seconds.
This is an order of magnitude speed up on the latest edit, and the results are all but identical to the previous method.
EDIT2:
Okay, so if X is a matrix, not a vector, things are a little more complicated. We basically want to precompute as much as possible outside of the for-loop to keep our costs down. We can also get a significant speed-up by computing XT*X manually - since the result will always be a symmetric matrix, we can cut a few corners to speed things up. First, the symmetric multiplication function:
function XTX = sym_mult(X) % X is a 3-d matrix
n = size(X,2);
XTX = zeros(n,n,size(X,3));
for i=1:n
for j=i:n
XTX(i,j,:) = sum(X(:,i,:).*X(:,j,:));
if i~=j
XTX(j,i,:) = XTX(i,j,:);
end
end
end
Now the actual computation script
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
Y = randn(10,xdim*ydim);
picture = randn(xdim,ydim,n_timepoints); % 500x500x10
% Actual work
tic % start timing
picture = reshape(picture, [xdim*ydim,n_timepoints])';
% Here we precompute the (XT*Y) calculation to speed things up later
picture_y = [sum(Y);sum(Y.*picture)];
% initialize
est = zeros(size(picture,2),1);
picture = permute(picture,[1,3,2]);
XTX = cat(2,ones(n_timepoints,1,size(picture,3)),picture);
XTX = sym_mult(XTX); % precompute (XT*X) for speed
X = zeros(2,2); % preallocate for speed
XY = zeros(2,1);
for x=1:size(XTX,3) % one iteration per pixel (the permute above moved pixels to dim 3)
%For some reason this is a lot faster than X = XTX(:,:,x);
X(1,1) = XTX(1,1,x);
X(2,1) = XTX(2,1,x);
X(1,2) = XTX(1,2,x);
X(2,2) = XTX(2,2,x);
XY(1) = picture_y(1,x);
XY(2) = picture_y(2,x);
% Here we utilise the fact that A\B is faster than inv(A)*B
% We also use the fact that (A*B)*C = A*(B*C) to speed things up
B = X\XY;
est(x) = B(1);
end
est = reshape(est,[xdim,ydim]);
toc % end timing
Before: Elapsed time is 4.56 seconds.
After: Elapsed time is 2.24 seconds.
This is a speed up of about a factor of 2. This code should be extensible to X being any dimensions you want. For instance, in the case where X = [1 x x^2], you would change picture_y to the following
picture_y = [sum(Y);sum(Y.*picture);sum(Y.*picture.^2)];
and change XTX to
XTX = cat(2,ones(n_timepoints,1,size(picture,3)),picture,picture.^2);
You would also change a lot of 2s to 3s in the code, and add XY(3) = picture_y(3,x) to the loop. It should be fairly straightforward, I believe.
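A sketch of the resulting loop for the quadratic case (assuming the two changes above, so XTX holds 3x3 blocks and picture_y has three rows):
X = zeros(3,3); % preallocate for speed
XY = zeros(3,1);
for x = 1:size(XTX,3)
    X(:,:) = XTX(:,:,x); % (the 2x2 version copies element-wise for speed; written compactly here)
    XY(1) = picture_y(1,x);
    XY(2) = picture_y(2,x);
    XY(3) = picture_y(3,x);
    B = X\XY; % solve the 3x3 normal equations
    est(x) = B(1);
end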
Results
I sped up your original version, since your edit 3 was actually not working (and also does something different).
So, on my PC:
Your (original) version: 8.428473 seconds.
My obfuscated one-liner given below: 0.964589 seconds.
First, for no other reason than to impress, I'll give it as I wrote it:
%%// Some example data
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
estimate = zeros(xdim,ydim); %// initialization with explicit size
picture = randn(xdim,ydim,n_timepoints);
%%// Your original solution
%// (slightly altered to make my version's results agree with yours)
tic
Y = randn(n_timepoints,xdim*ydim);
ii = 1;
for x = 1:xdim
for y = 1:ydim
X = squeeze(picture(x,y,:)); %// or similar creation of X matrix
B = (X'*X)^(-1)*X' * Y(:,ii);
ii = ii+1;
%// sometimes you keep everything and do
%// estimate(x,y,:) = B(:);
%// sometimes just the first element is important and you do
estimate(x,y) = B(1);
end
end
toc
%%// My version
tic
%// UNLEASH THE FURY!!
estimate2 = reshape(sparse(1:xdim*ydim*n_timepoints, ...
builtin('_paren', ones(n_timepoints,1)*(1:xdim*ydim),:), ...
builtin('_paren', permute(picture, [3 2 1]),:))\Y(:), ydim,xdim).'; %'
toc
%%// Check for equality
max(abs(estimate(:)-estimate2(:))) % (always less than ~1e-14)
Breakdown
First, here's the version that you should actually use:
%// Construct sparse block-diagonal matrix
%// (Type "help sparse" for more information)
N = xdim*ydim; %// number of columns
M = N*n_timepoints; %// number of rows
ii = 1:N;
jj = ones(n_timepoints,1)*(1:N);
s = permute(picture, [3 2 1]);
X = sparse(ii,jj(:), s(:));
%// Compute ALL the estimates at once
estimates = X\Y(:);
%// You loop through the *second* dimension first, so to make everything
%// agree, we have to extract elements in the "wrong" order, and transpose:
estimate2 = reshape(estimates, ydim,xdim).'; %'
Here's an example of what picture and the corresponding matrix X looks like for xdim = ydim = n_timepoints = 2:
>> clc, picture, full(X)
picture(:,:,1) =
-0.5643 -2.0504
-0.1656 0.4497
picture(:,:,2) =
0.6397 0.7782
0.5830 -0.3138
ans =
-0.5643 0 0 0
0.6397 0 0 0
0 -2.0504 0 0
0 0.7782 0 0
0 0 -0.1656 0
0 0 0.5830 0
0 0 0 0.4497
0 0 0 -0.3138
You can see why sparse is necessary -- it's mostly zeros, but will grow large quickly. The full matrix would quickly consume all your RAM, while the sparse one will not consume much more than the original picture matrix does.
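To put numbers on it: with xdim = ydim = 500 and n_timepoints = 10, X is 2,500,000 rows by 250,000 columns; stored full that is about 2.5e6 * 2.5e5 * 8 bytes, roughly 5 TB, while the sparse version stores only the 2.5e6 nonzeros, i.e. about the same footprint as the picture matrix itself.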
With this matrix X, the new problem
X·b = Y
now contains all the problems
X1 · b1 = Y1
X2 · b2 = Y2
...
where
b = [b1; b2; b3; ...]
Y = [Y1; Y2; Y3; ...]
so, the single command
X\Y
will solve all your systems at once.
This offloads all the hard work to a set of highly specialized, compiled to machine-specific code, optimized-in-every-way algorithms, rather than the interpreted, generic, always-two-steps-away from the hardware loops in MATLAB.
It should be straightforward to convert this to a version where X is a matrix; you'll end up with something like what blkdiag does, which can also be used by mldivide in exactly the same way as above.
I had a wee play around with an idea, and I decided to stick it in as a separate answer, as it's a completely different approach to my other idea, and I don't actually condone what I'm about to do. I think this is the fastest approach so far:
Original (unoptimised): 13.507176 seconds.
Fast Cholesky-decomposition method: 0.424464 seconds
First, we've got a function to quickly do the X'*X multiplication. We can speed things up here because the result will always be symmetric.
function XX = sym_mult(X)
n = size(X,2);
XX = zeros(n,n,size(X,3));
for i=1:n
for j=i:n
XX(i,j,:) = sum(X(:,i,:).*X(:,j,:));
if i~=j
XX(j,i,:) = XX(i,j,:);
end
end
end
Then we have a function to do an LDL Cholesky decomposition of a 3D matrix (we can do this because the (X'*X) matrix will always be symmetric) and then do forward and backward substitution to solve the LDL inversion equation
function Y = fast_chol(X,XY)
n=size(X,2);
L = zeros(n,n,size(X,3));
D = zeros(n,n,size(X,3));
B = zeros(n,1,size(X,3));
Y = zeros(n,1,size(X,3));
% These loops compute the LDL decomposition of the 3D matrix
for i=1:n
D(i,i,:) = X(i,i,:);
L(i,i,:) = 1;
for j=1:i-1
L(i,j,:) = X(i,j,:);
for k=1:(j-1)
L(i,j,:) = L(i,j,:) - L(i,k,:).*L(j,k,:).*D(k,k,:);
end
D(i,j,:) = L(i,j,:);
L(i,j,:) = L(i,j,:)./D(j,j,:);
if i~=j
D(i,i,:) = D(i,i,:) - L(i,j,:).^2.*D(j,j,:);
end
end
end
for i=1:n
B(i,1,:) = XY(i,:);
for j=1:(i-1)
B(i,1,:) = B(i,1,:)-D(i,j,:).*B(j,1,:);
end
B(i,1,:) = B(i,1,:)./D(i,i,:);
end
for i=n:-1:1
Y(i,1,:) = B(i,1,:);
for j=n:-1:(i+1)
Y(i,1,:) = Y(i,1,:)-L(j,i,:).*Y(j,1,:);
end
end
Finally, we have the main script which calls all of this
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
Y = randn(10,xdim*ydim);
picture = randn(xdim,ydim,n_timepoints); % 500x500x10
tic % start timing
picture = reshape(picture, [xdim*ydim,n_timepoints])';
% Here we precompute the (XT*Y) calculation
picture_y = [sum(Y);sum(Y.*picture)];
% initialize
est2 = zeros(size(picture,2),1);
picture = permute(picture,[1,3,2]);
% Now we calculate the X'*X matrix
XTX = cat(2,ones(n_timepoints,1,size(picture,3)),picture);
XTX = sym_mult(XTX);
% Call our fast Cholesky decomposition routine
B = fast_chol(XTX,picture_y);
est2 = B(1,:);
est2 = reshape(est2,[xdim,ydim]);
toc
Again, this should work equally well for an Nx3 X matrix, or however big you want.
I use Octave, thus I can't say anything about the resulting performance in MATLAB, but I would expect this code to be slightly faster:
pictureT = picture';
est = arrayfun(@(x) (pictureT(x,:)*picture(:,x))^-1 * pictureT(x,:) * randn(n_timepoints,1), 1:size(picture,2));