Precompute weights for multidimensional linear interpolation - matlab

I have a non-uniform rectangular grid along D dimensions, a matrix of logical values V on the grid, and a matrix of query data points X. The number of grid points differs across dimensions.
I run the interpolation multiple times for the same grid G and query X, but for different values V.
The goal is to precompute the indexes and weights for the interpolation and to reuse them, because they are always the same.
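For reference, D-linear interpolation is a weighted sum over the 2^D corners of the grid cell enclosing each query point: v(x) = sum over c in {0,1}^D of ( prod_d w(d,c_d) ) * V(i_1+c_1, ..., i_D+c_D), where i_d is the lower grid index along dimension d and w(d,0), w(d,1) are the two 1-D weights there. Both the corner indexes and the weights depend only on G and X, not on V, which is why they can be precomputed and reused.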
Here is an example in 2 dimensions, in which I have to compute indexes and values every time within the loop, but I want to compute them only once before the loop. I keep the data types from my application (mostly single and logical gpuArrays).
% Define grid
G{1} = single([0; 1; 3; 5; 10]);
G{2} = single([15; 17; 18; 20]);
% Steps and edges are redundant but help make interpolation a bit faster
S{1} = G{1}(2:end)-G{1}(1:end-1);
S{2} = G{2}(2:end)-G{2}(1:end-1);
gpuInf = 1e10;
% This is my workaround for a bug in the GPU version of discretize in Matlab R2017a.
% It throws an error if edges contain Inf, realmin, or realmax. Seems fixed in R2017b prerelease.
E{1} = [-gpuInf; G{1}(2:end-1); gpuInf];
E{2} = [-gpuInf; G{2}(2:end-1); gpuInf];
% Generate query points
n = 50; X = gpuArray(single([rand(n,1)*14-2, 14+rand(n,1)*7]));
[G1, G2] = ndgrid(G{1},G{2});
for i = 1 : 4
% Generate values on grid
foo = @(x1,x2) (sin(x1+rand) + cos(x2*rand))>0;
V = gpuArray(foo(G1,G2));
% Interpolate
V_interp = interpV(X, V, G, E, S);
% Plot results
subplot(2,2,i);
contourf(G1, G2, V); hold on;
scatter(X(:,1), X(:,2),50,[ones(n,1), 1-V_interp, 1-V_interp],'filled', 'MarkerEdgeColor','black'); hold off;
end
function y = interpV(X, V, G, E, S)
y = min(1, max(0, interpV_helper(X, 1, 1, 0, [], V, G, E, S) ));
end
function y = interpV_helper(X, dim, weight, curr_y, index, V, G, E, S)
if dim == ndims(V)+1
M = [1,cumprod(size(V),2)];
idx = 1 + (index-1)*M(1:end-1)';
y = curr_y + weight .* single(V(idx));
else
x = X(:,dim); grid = G{dim}; edges = E{dim}; steps = S{dim};
iL = single(discretize(x, edges));
weightL = weight .* (grid(iL+1) - x) ./ steps(iL);
weightH = weight .* (x - grid(iL)) ./ steps(iL);
y = interpV_helper(X, dim+1, weightL, curr_y, [index, iL ], V, G, E, S) +...
interpV_helper(X, dim+1, weightH, curr_y, [index, iL+1], V, G, E, S);
end
end

I found a way to do this and am posting it here because (as of now) two more people are interested. It requires only a slight modification to my original code (see below).
% Define grid
G{1} = single([0; 1; 3; 5; 10]);
G{2} = single([15; 17; 18; 20]);
% Steps and edges are redundant but help make interpolation a bit faster
S{1} = G{1}(2:end)-G{1}(1:end-1);
S{2} = G{2}(2:end)-G{2}(1:end-1);
gpuInf = 1e10;
% This is my workaround for a bug in the GPU version of discretize in Matlab R2017a.
% It throws an error if edges contain Inf, realmin, or realmax. Seems fixed in R2017b prerelease.
E{1} = [-gpuInf; G{1}(2:end-1); gpuInf];
E{2} = [-gpuInf; G{2}(2:end-1); gpuInf];
% Generate query points
n = 50; X = gpuArray(single([rand(n,1)*14-2, 14+rand(n,1)*7]));
[G1, G2] = ndgrid(G{1},G{2});
[W, I] = interpIW(X, G, E, S); % Precompute weights W and indexes I
for i = 1 : 4
% Generate values on grid
foo = @(x1,x2) (sin(x1+rand) + cos(x2*rand))>0;
V = gpuArray(foo(G1,G2));
% Interpolate
V_interp = sum(W .* single(V(I)), 2);
% Plot results
subplot(2,2,i);
contourf(G1, G2, V); hold on;
scatter(X(:,1), X(:,2), 50,[ones(n,1), 1-V_interp, 1-V_interp],'filled', 'MarkerEdgeColor','black'); hold off;
end
function [W, I] = interpIW(X, G, E, S)
global Weights Indexes
Weights=[]; Indexes=[];
interpIW_helper(X, 1, 1, [], G, E, S, []);
W = Weights; I = Indexes;
end
function [] = interpIW_helper(X, dim, weight, index, G, E, S, sizeV)
global Weights Indexes
if dim == size(X,2)+1
M = [1,cumprod(sizeV,2)];
Weights = [Weights, weight];
Indexes = [Indexes, 1 + (index-1)*M(1:end-1)'];
else
x = X(:,dim); grid = G{dim}; edges = E{dim}; steps = S{dim};
iL = single(discretize(x, edges));
weightL = weight .* (grid(iL+1) - x) ./ steps(iL);
weightH = weight .* (x - grid(iL)) ./ steps(iL);
interpIW_helper(X, dim+1, weightL, [index, iL ], G, E, S, [sizeV, size(grid,1)]);
interpIW_helper(X, dim+1, weightH, [index, iL+1], G, E, S, [sizeV, size(grid,1)]);
end
end
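A quick CPU-side sanity check of the precomputed weights (just a sketch; restricted to query points inside the grid, since outside the grid the comparison depends on how extrapolation is handled):
% Compare the precomputed-weight result against griddedInterpolant
in = X(:,1) >= G{1}(1) & X(:,1) <= G{1}(end) & ...
     X(:,2) >= G{2}(1) & X(:,2) <= G{2}(end);
F = griddedInterpolant({gather(G{1}), gather(G{2})}, gather(single(V)));
err = max(abs(gather(sum(W(in,:) .* single(V(I(in,:))), 2)) - F(gather(X(in,:)))))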

To do this, the whole interpolation process has to be carried out except for the final computation of the interpolated values. Here is a solution translated from the Octave C++ source. The input format is the same as the first signature of the interpn function, except that there is no need for the v array. Also, the query inputs should be vectors, not arrays in ndgrid format. Both outputs W (weights) and I (positions) have size (a, b), where a is the number of neighbors of each point on the grid and b is the number of query points to be interpolated.
function [W , I] = lininterpnw(varargin)
% [W I] = lininterpnw(X1,X2,...,Xn,Xq1,Xq2,...,Xqn)
n = numel(varargin)/2;
x = varargin(1:n);
y = varargin(n+1:end);
sz = cellfun(@numel,x);
scale = [1 cumprod(sz(1:end-1))];
Ni = numel(y{1});
index = zeros(n,Ni);
x_before = zeros(n,Ni);
x_after = zeros(n,Ni);
for ii = 1:n
jj = interp1(x{ii},1:sz(ii),y{ii},'previous');
index(ii,:) = jj-1;
x_before(ii,:) = x{ii}(jj);
x_after(ii,:) = x{ii}(jj+1);
end
coef(2:2:2*n,1:Ni) = (vertcat(y{:}) - x_before) ./ (x_after - x_before);
coef(1:2:end,:) = 1 - coef(2:2:2*n,:);
bit = permute(dec2bin(0:2^n-1)=='1', [2,3,1]);
%I = reshape(1+scale*bsxfun(@plus,index,bit), Ni, []).'; %Octave
I = reshape(1+sum(bsxfun(@times,scale(:),bsxfun(@plus,index,bit))), Ni, []).';
W = squeeze(prod(reshape(coef(bsxfun(@plus,(1:2:2*n).',bit),:).',Ni,n,[]),2)).';
end
Testing:
x={[1 3 8 9],[2 12 13 17 25]};
v = rand(4,5);
y={[1.5 1.6 1.3 3.5,8.1,8.3],[8.4,13.5,14.4,23,23.9,24.2]};
[W I]=lininterpnw(x{:},y{:});
sum(W.*v(I))
interpn(x{:},v,y{:})
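Note that interp1 with the 'previous' method returns NaN for queries outside the range of the grid vectors, so this version does not extrapolate: the query points should lie strictly inside the grid.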
Thanks to @SardarUsama for testing and his useful comments.

Unable to perform assignment because the size of the left side is 1-by-7-by-7 and the size of the right side is 6-by-6

I am looking for a way to find the same eigenvectors for 2 given matrices, so that I can perform a joint diagonalisation. For this, I found and tried to use qndiag (from https://github.com/pierreablin/qndiag.git ), which is defined by the following function:
function [D, B, infos] = qndiag(C, varargin)
% Joint diagonalization of matrices using the quasi-Newton method
%
% The algorithm is detailed in:
%
% P. Ablin, J.F. Cardoso and A. Gramfort. Beyond Pham’s algorithm
% for joint diagonalization. Proc. ESANN 2019.
% https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2019-119.pdf
% https://hal.archives-ouvertes.fr/hal-01936887v1
% https://arxiv.org/abs/1811.11433
%
% The function takes as input a set of matrices of size `(p, p)`, stored as
% a `(n, p, p)` array, `C`. It outputs a `(p, p)` array, `B`, such that the
% matrices `B * C(i,:,:) * B'` are as diagonal as possible.
%
% There are several optional parameters which can be provided in the
% varargin variable.
%
% Optional parameters:
% --------------------
% 'B0' Initial point for the algorithm.
% If absent, a whitener is used.
% 'weights' Weights for each matrix in the loss:
% L = sum(weights * KL(C, C')).
% No weighting (weights = 1) by default.
% 'maxiter' (int) Maximum number of iterations to perform.
% Default : 1000
%
% 'tol' (float) A positive scalar giving the tolerance at
% which the algorithm is considered to have converged.
% The algorithm stops when |gradient| < tol.
% Default : 1e-6
%
% lambda_min (float) A positive regularization scalar. Each
% eigenvalue of the Hessian approximation below
% lambda_min is set to lambda_min.
%
% max_ls_tries (int), Maximum number of line-search tries to
% perform.
%
% return_B_list (bool) Chooses whether or not to return the list
% of iterates.
%
% verbose (bool) Prints informations about the state of the
% algorithm if True.
%
% Returns
% -------
% D : Set of matrices jointly diagonalized
% B : Estimated joint diagonalizer matrix.
% infos : structure containing monitoring informations, containing the times,
% gradient norms and objective values.
%
% Example:
% --------
%
% [D, B] = qndiag(C, 'maxiter', 100, 'tol', 1e-5)
%
% Authors: Pierre Ablin <pierre.ablin@inria.fr>
% Alexandre Gramfort <alexandre.gramfort@inria.fr>
%
% License: MIT
% First tests
if nargin == 0,
error('No signal provided');
end
if length(size(C)) ~= 3,
error('Input C should be 3 dimensional');
end
if ~isa (C, 'double'),
fprintf ('Converting input data to double...');
X = double(X);
end
% Default parameters
C_mean = squeeze(mean(C, 1));
[p, d] = eigs(C_mean);
p = fliplr(p);
d = flip(diag(d));
B = p' ./ repmat(sqrt(d), 1, size(p, 1));
max_iter = 1000;
tol = 1e-6;
lambda_min = 1e-4;
max_ls_tries = 10;
return_B_list = false;
verbose = false;
weights = [];
% Read varargin
if mod(length(varargin), 2) == 1,
error('There should be an even number of optional parameters');
end
for i = 1:2:length(varargin)
param = lower(varargin{i});
value = varargin{i + 1};
switch param
case 'B0'
B0 = value;
case 'max_iter'
max_iter = value;
case 'tol'
tol = value;
case 'weights'
weights = value / mean(value(:));
case 'lambda_min'
lambda_min = value;
case 'max_ls_tries'
max_ls_tries = value;
case 'return_B_list'
return_B_list = value;
case 'verbose'
verbose = value;
otherwise
error(['Parameter ''' param ''' unknown'])
end
end
[n_samples, n_features, ~] = size(C);
D = transform_set(B, C, false);
current_loss = NaN;
% Monitoring
if return_B_list
B_list = []
end
t_list = [];
gradient_list = [];
loss_list = [];
if verbose
print('Running quasi-Newton for joint diagonalization');
print('iter | obj | gradient');
end
for t=1:max_iter
if return_B_list
B_list(k) = B;
end
diagonals = zeros(n_samples, n_features);
for k=1:n_samples
diagonals(k, :) = diag(squeeze(D(k, :, :)));
end
% Gradient
if isempty(weights)
G = squeeze(mean(bsxfun(@rdivide, D, ...
reshape(diagonals, n_samples, n_features, 1)), ...
1)) - eye(n_features);
else
G = squeeze(mean(...
bsxfun(@times, ...
reshape(weights, n_samples, 1, 1), ...
bsxfun(@rdivide, D, ...
reshape(diagonals, n_samples, n_features, 1))), ...
1)) - eye(n_features);
end
g_norm = norm(G);
if g_norm < tol
break
end
% Hessian coefficients
if isempty(weights)
h = mean(bsxfun(@rdivide, ...
reshape(diagonals, n_samples, 1, n_features), ...
reshape(diagonals, n_samples, n_features, 1)), 1);
else
h = mean(bsxfun(@times, ...
reshape(weights, n_samples, 1, 1), ...
bsxfun(@rdivide, ...
reshape(diagonals, n_samples, 1, n_features), ...
reshape(diagonals, n_samples, n_features, 1))), ...
1);
end
h = squeeze(h);
% Quasi-Newton's direction
dt = h .* h' - 1.;
dt(dt < lambda_min) = lambda_min; % Regularize
direction = -(G .* h' - G') ./ dt;
% Line search
[success, new_D, new_B, new_loss, direction] = ...
linesearch(D, B, direction, current_loss, max_ls_tries, weights);
D = new_D;
B = new_B;
current_loss = new_loss;
% Monitoring
gradient_list(t) = g_norm;
loss_list(t) = current_loss;
if verbose
print(sprintf('%d - %.2e - %.2e', t, current_loss, g_norm))
end
end
infos = struct();
infos.t_list = t_list;
infos.gradient_list = gradient_list;
infos.loss_list = loss_list;
if return_B_list
infos.B_list = B_list
end
end
function [op] = transform_set(M, D, diag_only)
[n, p, ~] = size(D);
if ~diag_only
op = zeros(n, p, p);
for k=1:length(D)
op(k, :, :) = M * squeeze(D(k, :, :)) * M';
end
else
op = zeros(n, p);
for k=1:length(D)
op(k, :) = sum(M .* (squeeze(D(k, :, :)) * M'), 1);
end
end
end
function [v] = slogdet(A)
v = log(abs(det(A)));
end
function [out] = loss(B, D, is_diag, weights)
[n, p, ~] = size(D);
if ~is_diag
diagonals = zeros(n, p);
for k=1:n
diagonals(k, :) = diag(squeeze(D(k, :, :)));
end
else
diagonals = D;
end
logdet = -slogdet(B);
if ~isempty(weights)
diagonals = bsxfun(@times, diagonals, reshape(weights, n, 1));
end
out = logdet + 0.5 * sum(log(diagonals(:))) / n;
end
function [success, new_D, new_B, new_loss, delta] = linesearch(D, B, direction, current_loss, n_ls_tries, weights)
[n, p, ~] = size(D);
step = 1.;
if current_loss == NaN
current_loss = loss(B, D, false);
end
success = false;
for n=1:n_ls_tries
M = eye(p) + step * direction;
new_D = transform_set(M, D, true);
new_B = M * B;
new_loss = loss(new_B, new_D, true, weights);
if new_loss < current_loss
success = true;
break
end
step = step / 2;
end
new_D = transform_set(M, D, false);
delta = step * direction;
end
I use it with the following script that you can test with downloading the 2 input matrices at the bottom of this post :
clc; clear
m=7 % dimension
n=2 % number of matrices
% Load spectro and WL+GCph+XC
FISH_GCsp = load('Fisher_GCsp_flat.txt');
FISH_XC = load('Fisher_XC_GCph_WL_flat.txt');
% Marginalizing over uncommon parameters between the two matrices
COV_GCsp_first = inv(FISH_GCsp);
COV_XC_first = inv(FISH_XC);
COV_GCsp = COV_GCsp_first(1:m,1:m);
COV_XC = COV_XC_first(1:m,1:m);
% Invert to get Fisher matrix
FISH_sp = inv(COV_GCsp);
FISH_xc = inv(COV_XC);
% Drawing a random set of commuting matrices
C=zeros(n,m,m);
B0=zeros(n,m,m);
C(1,:,:) = FISH_sp
C(2,:,:) = FISH_xc
%[D, B] = qndiag(C, 'max_iter', 1e6, 'tol', 1e-6);
[D, B] = qndiag(C);
% Print diagonal matrices
B*C(1,:,:)*B'
B*C(2,:,:)*B'
But unfortunately, I get the following error :
Unable to perform assignment because the size of the left side is 1-by-7-by-7 and the size of the
right side is 6-by-6.
Error in qndiag>transform_set (line 224)
op(k, :, :) = M * squeeze(D(k, :, :)) * M';
Error in qndiag (line 128)
D = transform_set(B, C, false);
Error in compute_joint_diagonalization (line 32)
[D, B] = qndiag(C);
I don't understand the utility of the squeeze function and, most importantly, why the eigs function returns only 6 values and not 7 as in my data (the input matrices have size 7x7).
What might be wrong with these array dimensions and how can I fix it?
I put the 2 input files available here :
Matrix Fisher_GCsp_flat.txt
Matrix Fisher_XC_GCph_WL_flat.txt
You can test the above code that calls qndiag for these 2 matrices.
Update 1
To allow interested people to test the code quickly, I have put a link to the archive:
Archive_Matlab_StackOver.tar
You just have to untar it and execute the script compute_joint_diagonalization.m under Matlab; you should then see the above error (regarding the use of the eigs and squeeze functions).
It should help you understand the origin of this issue.
Update 2
If I replace [p, d] = eigs(C_mean) by [p, d] = eigs(C_mean,7), I get another error:
Index in position 1 exceeds array bounds (must not exceed 2).
Error in qndiag>transform_set (line 224)
op(k, :, :) = M * squeeze(D(k, :, :)) * M';
Error in qndiag (line 128)
D = transform_set(B, C, false);
Error in compute_joint_diagonalization (line 27)
[D, B] = qndiag(C);
However, the dimensions of the 2 matrices used are 7x7 and should be correctly processed with eigs(C_mean,7).
Update 3
The sizes of op, D, and M, and the value of length(D), are as follows (followed by the error message):
size(D) =
2 7 7
length(D) =
7
size(M) =
7 7
size(op) =
2 7 7
Index in position 1 exceeds array bounds (must not exceed 2).
Error in qndiag>transform_set (line 231)
op(k, :, :) = M * squeeze(D(k, :, :)) * M';
Error in qndiag (line 128)
D = transform_set(B, C, false);
Error in compute_joint_diagonalization (line 27)
[D, B] = qndiag(C);
Notice that k varies from 1 to length(D)=7.
Is there an issue that could arise with these dimensions?
From the documentation for eigs:
d = eigs(A) returns a vector of the six largest magnitude eigenvalues of matrix A.
If you want all seven, you need to call d = eigs(A,7) or d = eig(A). For a small matrix (e.g. < 1000 x 1000) it's usually easier to just get all the eigenvalues with eig, rather than get a subset with eigs.
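A quick way to see this, using a matrix with known eigenvalues:
A = diag(1:7);     % 7x7 matrix with eigenvalues 1..7
numel(eigs(A))     % 6 -- by default eigs returns the six largest magnitude
numel(eigs(A, 7))  % 7 -- explicitly request all seven
numel(eig(A))      % 7 -- eig always returns all of them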
Edit: Responding to your "Update 3"
for k=1:length(D) should be replaced by for k=1:n. This needs to be changed on two lines. Judging from your error message they are lines 231 and 236.
L = length(X) returns the length of the largest array dimension in X, which in your case is 7, i.e. too high for the first dimension.
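You can check this directly:
D = zeros(2, 7, 7); % same shape as your stacked matrices
length(D)           % 7 -- the largest dimension, not the first
size(D, 1)          % 2 -- the number of matrices, which is what the loop needs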

Avoiding a for loop in MATLAB

I have written the MATLAB code below. I want to know how I can optimize it without using a for loop.
Any help will be much appreciated.
MATLAB code:
%Some parameters:
s = 50;
k = 50;
r = 0.1;
v = 0.2;
t = 2;
n=10000;
% Calculate CT by calling EurCall function
CT = EurCall(s, k, r, v, t, n);
%Function EurCall to be called
function C = EurCall(s, k, r, v, t, n)
X = zeros(n,1);
hh = zeros(n,1);
for ii = 1 : n
X(ii) = normrnd(0, 1);
SS = s*exp((r - v^2/2)*t + v*X(ii)*sqrt(t));
hh(ii) = exp(-r*t)*max(SS - k, 0);
end %end for loop
C = (1/n) * sum(hh);
end %end function
Vectorized Approach:
Here is a vectorized approach that I think replicates the functionality of the original script. Instead of looping, this example declares X as a vector of size n by 1. By using element-wise multiplication .* we can calculate the remaining vectors SS and hh without needing to loop through the indices. In this case SS and hh will also be vectors of size n by 1. I do agree with the comment above that MATLAB's for loops are no longer inherently slow.
%Some parameters:
s = 50;
k = 50;
r = 0.1;
v = 0.2;
t = 2;
n=10000;
% Calculate CT by calling EurCall function
[CT] = EurCall(s, k, r, v, t, n);
%Function EurCall to be called
function [C] = EurCall(s, k, r, v, t, n)
mu = 0; sigma = 1;
%Creating a vector of normal random numbers of size (n by 1)%
X = normrnd(mu,sigma,[n 1]);
SS = s*exp((r - v^2/2)*t + v.*X.*sqrt(t));
hh = exp(-r*t)*max(SS - k, 0);
C = (1/n) * sum(hh);
end %end function
Ran using MATLAB R2019b
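If you want to quantify the speedup on your machine, a rough sketch with timeit (assuming the original looped version has been kept under the hypothetical name EurCall_loop) would be:
f_loop = @() EurCall_loop(50, 50, 0.1, 0.2, 2, 10000); % hypothetical name for the looped version
f_vec  = @() EurCall(50, 50, 0.1, 0.2, 2, 10000);      % vectorized version above
t_loop = timeit(f_loop)
t_vec  = timeit(f_vec)
Results will vary with MATLAB version and hardware.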

Error using feval Undefined function or variable 'Sfun'

I have always used R, so I am quite new to Matlab and am running into some troubleshooting issues. I am running some code for a tensor factorization method (available here: https://github.com/caobokai/tBNE). To start, I tried to run the demo code, which generates simulated data to run the method with; this results in the following error(s):
Error using feval
Undefined function or variable 'Sfun'.
Error in OptStiefelGBB (line 199)
[F, G] = feval(fun, X , varargin{:}); out.nfe = 1;
Error in tbne_demo>tBNE_fun (line 124)
S, @Sfun, opts, B, P, X, L, D, W, Y, alpha, beta);
Here is the block of code I am running:
clear
clc
addpath(genpath('./tensor_toolbox'));
addpath(genpath('./FOptM'));
rng(5489, 'twister');
m = 10;
n = 10;
k = 10; % rank for tensor
[X, Z, Y] = tBNE_data(m, n, k); % generate the tensor, guidance and label
[T, W] = tBNE_fun(X, Z, Y, k);
[~, y1] = max(Y, [], 2);
[~, y2] = max(T{3} * W, [], 2);
fprintf('accuracy %3.2e\n', sum(y1 == y2) / n);
function [X, Z, Y] = tBNE_data(m, n, k)
B = randn(m, k);
S = randn(n, k);
A = {B, B, S};
X = ktensor(A);
Z = randn(n, 4);
Y = zeros(n, 2);
l = ceil(n / 2);
Y(1 : l, 1) = 1;
Y(l + 1 : end, 2) = 1;
X = tensor(X);
end
function [T, W] = tBNE_fun(X, Z, Y, k)
% t-BNE computes brain network embedding based on constrained tensor factorization
%
% INPUT
% X: brain networks stacked in a 3-way tensor
% Z: side information
% Y: label information
% k: rank of CP factorization
%
% OUTPUT
% T is the factor tensor containing
% vertex factor matrix B = T{1} and
% subject factor matrix S = T{3}
% W is the weight matrix
%
% Example: see tBNE_demo.m
%
% Reference:
% Bokai Cao, Lifang He, Xiaokai Wei, Mengqi Xing, Philip S. Yu,
% Heide Klumpp and Alex D. Leow. t-BNE: Tensor-based Brain Network Embedding.
% In SDM 2017.
%
% Dependency:
% [1] Matlab tensor toolbox v 2.6
% Brett W. Bader, Tamara G. Kolda and others
% http://www.sandia.gov/~tgkolda/TensorToolbox
% [2] A feasible method for optimization with orthogonality constraints
% Zaiwen Wen and Wotao Yin
% http://www.math.ucla.edu/~wotaoyin/papers/feasible_method_matrix_manifold.html
%% set algorithm parameters
printitn = 10;
maxiter = 200;
fitchangetol = 1e-4;
alpha = 0.1; % weight for guidance
beta = 0.1; % weight for classification loss
gamma = 0.1; % weight for regularization
u = 1e-6;
umax = 1e6;
rho = 1.15;
opts.record = 0;
opts.mxitr = 20;
opts.xtol = 1e-5;
opts.gtol = 1e-5;
opts.ftol = 1e-8;
%% compute statistics
dim = size(X);
normX = norm(X);
numClass = size(Y, 2);
m = dim(1);
n = dim(3);
l = size(Y, 1);
D = [eye(l), zeros(l, n - l)];
L = diag(sum(Z * Z')) - Z * Z';
%% initialization
B = randn(m, k);
P = B;
S = randn(n, k);
S = orth(S);
W = randn(k, numClass);
U = zeros(m, k); % Lagrange multipliers
%% main loop
fit = 0;
for iter = 1 : maxiter
fitold = fit;
% update B
ete = (S' * S) .* (P' * P); % compute E'E
b = 2 * ete + u * eye(k);
c = 2 * mttkrp(X, {B, P, S}, 1) + u * P + U;
B = c / b;
% update P
ftf = (S' * S) .* (B' * B); % compute F'F
b = 2 * ftf + u * eye(k);
c = 2 * mttkrp(X, {B, P, S}, 2) + u * B - U;
P = c / b;
% update U
U = U + u * (P - B);
% update u
u = min(rho * u, umax);
% update S
tic;
[S, out] = OptStiefelGBB(...
S, @Sfun, opts, B, P, X, L, D, W, Y, alpha, beta);
tsolve = toc;
fprintf(...
['[S]: obj val %7.6e, cpu %f, #func eval %d, ', ...
'itr %d, |ST*S-I| %3.2e\n'], ...
out.fval, tsolve, out.nfe, out.itr, norm(S' * S - eye(k), 'fro'));
% update W
H = D * S;
W = (H' * H + gamma * eye(k)) \ H' * Y;
% compute the fit
T = ktensor({B, P, S});
normresidual = sqrt(normX ^ 2 + norm(T) ^ 2 - 2 * innerprod(X, T));
fit = 1 - (normresidual / normX);
fitchange = abs(fitold - fit);
if mod(iter, printitn) == 0
fprintf(' Iter %2d: fitdelta = %7.1e\n', iter, fitchange);
end
% check for convergence
if (iter > 1) && (fitchange < fitchangetol)
break;
end
end
%% clean up final results
T = arrange(T); % columns are normalized
fprintf('factorization error %3.2e\n', fit);
end
I know that there is little context here, but my suspicion is that I need to have Simulink, as Sfun is a Simulink-related function(?). The script requires two toolboxes: tensor_toolbox and FOptM.
Available at:
https://www.sandia.gov/~tgkolda/TensorToolbox/index-2.6.html
https://github.com/andland/FOptM
Thank you so much for your help,
Paul
Although SFun is an often-used abbreviation for a Simulink S-Function, in this case the error has nothing to do with Simulink, and the name is a coincidence. (There is no Simulink-related function specifically called Sfun; it is just a general term.)
Your error message has @Sfun in it, which is the MATLAB way of creating a function handle to an (m-code) function called Sfun. I'd surmise from the code you've shown that this is a cost function used in the optimization.
If you look at the code that your code is based on (tBNE_fun.m) you'll see that there is a function at the end of the file called Sfun. It is this that you are missing.
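As a minimal illustration of the pattern (hypothetical names, not the actual Sfun from tBNE): feval(fun, ...) only works if fun is a handle to a function that is in scope, for example a local function at the end of the same file.
function demo_handles
fh = @quadCost;          % handle to the local function defined below
[F, G] = feval(fh, 3)    % F = 9, G = 6
end
function [F, G] = quadCost(X) % local function: objective value and gradient
F = X.^2;
G = 2*X;
end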

Normalizing a histogram in matlab

I have a histogram
hist(A, 801)
that currently resembles a normal curve but with max value at y = 1500, and mean at x = 0.5. I want to normalize it, so I tried
h = hist(A, 801)
h = h ./ sum(h)
bar(h)
Now I get a normal curve with max at y = .03, but with the mean at x = 450.
How do I decrease the frequency so the sum is 1, while retaining the same x range?
A is derived from
A = walk(50000, 800, .05, 2, .25, 0)
where
function [X_new] = walk(N_sim, N, mu, T, sigma, X_init)
delt = T/N;
up = sigma*sqrt(delt);
down = -sigma*sqrt(delt);
p = 1./2.*(1.+mu/sigma*sqrt(delt));
X_new = zeros(N_sim,1);
X_new(1:N_sim,1) = X_init;
ptest = zeros(N_sim,1);
for i = 1:N
ptest(:,1) = rand(N_sim,1);
ptest(:,1) = (ptest(:,1) <= p);
X_new(:,1) = X_new(:,1) + ptest(:,1)*up + (1.-ptest(:,1))*down;
end
The sum is 1 with your code as it stands.
You may want the integral to equal 1 instead (so that you can compare with the theoretical pdf). In that case:
[h, c] = hist(A, 801); %// c contains bin centers. They are equally spaced
h = h / sum(h) / (c(2)-c(1)); %// normalize to area 1
trapz(c,h) %// compute integral. Should be approximately 1
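If the point is to compare against a theoretical density, you can then overlay it, e.g. a normal pdf with the sample mean and standard deviation (normpdf is in the Statistics Toolbox):
bar(c, h, 1); hold on %// normalized histogram with area 1
xx = linspace(min(A), max(A), 200);
plot(xx, normpdf(xx, mean(A), std(A)), 'r', 'LineWidth', 1.5); hold off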

Least squares circle fitting using MATLAB Optimization Toolbox

I am trying to implement least squares circle fitting following this paper (sorry I can't publish it). The paper states, that we could fit a circle, by calculating the geometric error as the euclidean distance (Xi'') between a specific point (Xi) and the corresponding point on the circle (Xi'). We have three parametres: Xc (a vector of coordinates the center of circle), and R (radius).
I came up with the following MATLAB code (note that I am trying to fit circles, not spheres as it is indicated on the images):
function [ circle ] = fit_circle( X )
% Initialize the circle parameter structure
% R - radius of the circle
% Xc - center of the circle
circle.R = NaN;
circle.Xc = [ NaN; NaN ];
% Initial fit:
% let the center of the circle be the centroid of the points,
% and the radius the mean distance from that center
circle.Xc = mean( X );
d = bsxfun(@minus, X, circle.Xc);
circle.R = mean(bsxfun(@hypot, d(:,1), d(:,2)));
circle.Xc = circle.Xc(1:2)+random('norm', 0, 1, size(circle.Xc));
% Optimization
options = optimset('Jacobian', 'on');
out = lsqnonlin(@ort_error, [circle.Xc(1), circle.Xc(2), circle.R], [], [], options, X);
end
%% Cost function
function [ error, J ] = ort_error( P, X )
%% Calculate error
R = P(3);
a = P(1);
b = P(2);
d = bsxfun(@minus, X, P(1:2)); % X - Xc
n = bsxfun(@hypot, d(:,1), d(:,2)); % || X - Xc ||
res = d - R * bsxfun(@times,d,1./n);
error = zeros(2*size(X,1), 1);
error(1:2:2*size(X,1)) = res(:,1);
error(2:2:2*size(X,1)) = res(:,2);
%% Jacobian
xdR = d(:,1)./n;
ydR = d(:,2)./n;
xdx = bsxfun(@plus,-R./n+(d(:,1).^2*R)./n.^3,1);
ydy = bsxfun(@plus,-R./n+(d(:,2).^2*R)./n.^3,1);
xdy = (d(:,1).*d(:,2)*R)./n.^3;
ydx = xdy;
J = zeros(2*size(X,1), 3);
J(1:2:2*size(X,1),:) = [ xdR, xdx, xdy ];
J(2:2:2*size(X,1),:) = [ ydR, ydx, ydy ];
end
The fitting, however, is not very good: if I start with the correct parameter vector, the algorithm terminates at the first step (so there is a local minimum where it should be), but if I perturb the starting point (with a noiseless circle) the fitting stops with very large errors. I am sure that I have overlooked something in my implementation.
For what it's worth, I implemented these methods in MATLAB a while ago. However, I apparently did it before I knew about lsqnonlin etc., as it uses a hand-implemented regression. This is probably slow, but may help to compare with your code.
function [x, y, r, sq_error] = circFit ( P )
%# CIRCFIT fits a circle to a set of points using least squares
%# P is a 2 x n matrix of points to be fitted
per_error = 0.1/100; % i.e. 0.1%
%# initial estimates
X = mean(P, 2)';
r = sqrt(mean(sum((repmat(X', [1, length(P)]) - P).^2)));
v_cen2points = zeros(size(P));
niter = 0;
%# looping until convergence
while niter < 1 || per_diff > per_error
%# vector from centre to each point
v_cen2points(1, :) = P(1, :) - X(1);
v_cen2points(2, :) = P(2, :) - X(2);
%# distances from centre to each point
centre2points = sqrt(sum(v_cen2points.^2));
%# distances from edge of circle to each point
d = centre2points - r;
%# computing the Jacobian matrix J, and solving the matrix eqn.
R = (v_cen2points ./ [centre2points; centre2points])';
J = [ -ones(length(R), 1), -R ];
D_rXY = -J\d';
%# updating centre and radius
r_old = r; X_old = X;
r = r + D_rXY(1);
X = X + D_rXY(2:3)';
%# calculating maximum percentage change in values
per_diff = max(abs( [(r_old - r) / r, (X_old - X) ./ X ])) * 100;
%# prevent endless looping
niter = niter + 1;
if niter > 1000
error('Convergence not met in 1000 iterations!')
end
end
x = X(1);
y = X(2);
sq_error = sum(d.^2);
This is then run with:
X = [1 2 5 7 9 3];
Y = [7 6 8 7 5 7];
[x_centre, y_centre, r] = circFit( [X; Y] )
And plotted with:
[X, Y] = cylinder(r, 100);
scatter(X, Y, 60, '+r'); axis equal
hold on
plot(X(1, :) + x_centre, Y(1, :) + y_centre, '-b', 'LineWidth', 1);
Giving: (plot of the fitted circle through the data points)