Multilevel Otsu Threshold in MATLAB

I have tried to implement the multilevel Otsu threshold algorithm, but my values do not correspond to the ones from the MATLAB internal function, so maybe I have made a mistake.
This is the standard algorithm (its steps are reproduced in the comments below).
Here is my code:
function [T] = MultiLevelOtsu(img, n)
% Calculate histogram of the image
H = imhist(uint8(img));
a = size(H, 2);
b = size(H, 1);
T = zeros(1, n);
% Step 1 : Repeat Steps 2-6 recursively n/2-1 times
for i = 1:n/2-1
    % Step 2 : Define & update range
    R = [a:b]';
    % Step 3 : Find mean & std of all pixels in R
    mu = mean(R);
    sigma = std(R);
    % Step 4 : Sub-ranges T1 & T2
    T1 = mu - 1*sigma;
    T2 = mu + 1*sigma;
    % Step 5 : Threshold calculation
    T(i) = floor((ceil(otsuthresh(H(a:T1))*256)+ceil(otsuthresh(H(T2:b))*256))/2);
    % Step 6 : Update a & b
    a = T1+1;
    b = T2-1;
end
% Step 7 : Update Step 5
T1 = mu;
T2 = mu+1;
T = floor((ceil(otsuthresh(H(a:T1))*256)+ceil(otsuthresh(H(T2:b))*256))/2);
end
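For reference, if the Image Processing Toolbox is available, the built-in multithresh computes multilevel Otsu thresholds directly, so it is a quick way to sanity-check the values above (a minimal sketch; it assumes img is a uint8 grayscale image and n is the number of thresholds):
T_builtin = multithresh(img, n);     % n thresholds on the 0-255 intensity scale
T_custom  = MultiLevelOtsu(img, n);  % the function above
disp(double(T_builtin))
disp(double(T_custom))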

Related

How to run the k-means algorithm on SIFT keypoints in MATLAB

I need to run the K-Means algorithm on the key points of the SIFT algorithm in MATLAB. I want to cluster the key points in the image but I do not know how to do it.
First, put the key points into X with the x coordinates in the first column and the y coordinates in the second column, like this:
X = [reshape(keypxcoord,numel(keypxcoord),1), reshape(keypycoord,numel(keypycoord),1)]
Then, if you have the Statistics Toolbox, you can just use the built-in 'kmeans' function like this:
output = kmeans(X,num_clusters)
Otherwise, write your own kmeans function:
function [ min_group, mu ] = mykmeans( X, K )
%MYKMEANS
%   X = N observations of D element vectors (one observation per column, D x N)
%   K = number of centroids
assert(K > 0);
D = size(X,1); % No. of r.v.
N = size(X,2); % No. of observations
group_size = zeros(1,K);
min_group = zeros(1,N);
step = 0;
%% init centroids
mu = kpp(X,K);
%% 2-phase iterative approach (local then global)
while step < 400
    %% phase 1, batch update
    old_group = min_group;
    % computing distances
    d2 = distances2(X,mu);
    % reassigning all points to the closest centroids
    [~, min_group] = min(d2,[],1);
    % recomputing centroids (K number of means)
    for k = 1 : K
        group_size(k) = sum(min_group==k);
        % check empty group
        assert(group_size(k) > 0);
        mu(:,k) = sum(X(:,min_group==k),2)/group_size(k);
    end
    changed = sum(min_group ~= old_group);
    p1_converged = changed <= N*0.001;
    %% phase 2, individual update
    changed = 0;
    for n = 1 : N
        d2 = distances2(X(:,n),mu);
        [~, new_group] = min(d2,[],1);
        % recomputing centroids of affected groups
        k = min_group(n);
        if (new_group ~= k)
            mu(:,k) = (mu(:,k)*group_size(k) - X(:,n));
            group_size(k) = group_size(k) - 1;
            mu(:,k) = mu(:,k)/group_size(k);
            mu(:,new_group) = mu(:,new_group)*group_size(new_group) + X(:,n);
            group_size(new_group) = group_size(new_group) + 1;
            mu(:,new_group) = mu(:,new_group)/group_size(new_group);
            min_group(n) = new_group;
            changed = changed + 1;
        end
    end
    %% check convergence
    if p1_converged && changed <= N*0.001
        break;
    else
        step = step + 1;
    end
end
end
function d2 = distances2(X, mu)
K = size(mu,2);
N = size(X,2);
d2 = zeros(K,N);
for j = 1 : K
    d2(j,:) = sum((X - repmat(mu(:,j),1,N)).^2,1);
end
end

function mu = kpp( X, K )
% kmeans++ init
D = size(X,1); % No. of r.v.
N = size(X,2); % No. of observations
mu = zeros(D, K);
mu(:,1) = X(:,round(rand(1) * (size(X, 2)-1)+1));
for k = 2 : K
    % computing distances between centroids and observations
    d2 = distances2(X, mu(:,1:k-1));
    % assignment
    [min_dist, ~] = min(d2,[],1);
    % select the new centroid by picking the point whose cumulative distance
    % value exceeds a random value (the value falls in the range between
    % dist(n-1) : dist(n), with dist(0) = 0)
    rv = sum(min_dist) * rand(1);
    for n = 1 : N
        if min_dist(n) >= rv
            mu(:,k) = X(:,n);
            break;
        else
            rv = rv - min_dist(n);
        end
    end
end
end
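A minimal usage sketch: mykmeans expects observations as columns (a D x N matrix), so the N x 2 keypoint matrix X built above has to be transposed; the cluster count num_clusters below is just an arbitrary example value.
num_clusters = 5;                                 % pick your own K
[labels, centroids] = mykmeans(X', num_clusters);
% labels(n) is the cluster index of keypoint n, centroids is D x K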

Fast approach in matlab to estimate linear regression with AR terms

I am trying to estimate regression and AR parameters for (loads of) linear regressions with AR error terms. (You could also think of this as an MA process with exogenous variables.) The model is
y_t = X_t*beta + u_t , where
u_t = rho_1*u_{t-1} + ... + rho_p*u_{t-p} + eps_t , with lags of length p
I am following the official MATLAB recommendations and use regARIMA to set up a number of regressions and extract the regression and AR parameters (see the reproducible example below).
The problem: regARIMA is slow! For 5 regressions, MATLAB needs 14.24 sec, and I intend to run a large number of different regression models. Is there any quicker method around?
y = rand(100,1);
r2 = rand(100,1);
r3 = rand(100,1);
r4 = rand(100,1);
r5 = rand(100,1);
exo = [r2 r3 r4 r5];
tic
for p = 0:4
    Mdl = regARIMA(3,0,0);
    [EstMdl, ~, LogL] = estimate(Mdl,y,'X',exo,'Display','off');
end
toc
Unlike the regARIMA function, which uses maximum likelihood, the Cochrane-Orcutt procedure relies on iterated OLS regressions. There are a few more particularities about when this approach is valid (refer to the link posted), but for the aim of this question the approach is valid, and fast!
I modified James LeSage's code, which covers only AR lags of order 1, to cover lags of order p.
function result = olsc(y,x,arterms)
% PURPOSE: computes Cochrane-Orcutt OLS regression for AR(p) errors
%---------------------------------------------------
% USAGE: results = olsc(y,x,arterms)
% where: y = dependent variable vector (nobs x 1)
% x = independent variables matrix (nobs x nvar)
% arterms = order p of the AR error process
%---------------------------------------------------
% RETURNS: a structure
% results.meth = 'olsc'
% results.beta = bhat estimates
% results.rho = rho estimate
% results.tstat = t-stats
% results.trho = t-statistic for rho estimate
% results.yhat = yhat
% results.resid = residuals
% results.sige = e'*e/(n-k)
% results.rsqr = rsquared
% results.rbar = rbar-squared
% results.iter = niter x 3 matrix of [rho converg iteration#]
% results.nobs = nobs
% results.nvar = nvars
% results.y = y data vector
% --------------------------------------------------
% SEE ALSO: prt_reg(results), plt_reg(results)
%---------------------------------------------------
% written by:
% James P. LeSage, Dept of Economics
% University of Toledo
% 2801 W. Bancroft St,
% Toledo, OH 43606
% jpl@jpl.econ.utoledo.edu
% do error checking on inputs
if (nargin ~= 3); error('Wrong # of arguments to olsc'); end;
[nobs nvar] = size(x);
[nobs2 junk] = size(y);
if (nobs ~= nobs2); error('x and y must have same # obs in olsc'); end;
% ----- setup parameters
ITERMAX = 100;
converg = 1.0;
rho = zeros(arterms,1);
iter = 1;
% xtmp = lag(x,1);
% ytmp = lag(y,1);
% truncate 1st observation to feed the lag
% xlag = x(1:nobs-1,:);
% ylag = y(1:nobs-1,1);
yt = y(1+arterms:nobs,1);
xt = x(1+arterms:nobs,:);
xlag = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
xlag(:,nvar*(tt-1)+1:nvar*(tt-1)+nvar) = x(arterms-tt+1:nobs-tt,:);
end
ylag = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
ylag(:,tt) = y(arterms-tt+1:nobs-tt,:);
end
% setup storage for iteration results
iterout = zeros(ITERMAX,3);
while (converg > 0.0001) && (iter < ITERMAX)
% step 1, using initial rho = 0, do OLS to get bhat
ystar = yt - ylag*rho;
xstar = zeros(nobs-arterms,nvar);
for ii = 1 : nvar
tmp = zeros(1,arterms);
for tt = 1:arterms
tmp(1,tt)=ii+nvar*(tt-1);
end
xstar(:,ii) = xt(:,ii) - xlag(:,tmp)*rho;
end
beta = (xstar'*xstar)\xstar' * ystar;
e = y - x*beta;
% truncate 1st observation to account for the lag
et = e(1+arterms:nobs,1);
elagt = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
elagt(:,tt) = e(arterms-tt+1:nobs-tt,:);
end
% step 2, update estimate of rho using residuals
% from step 1
res_rho = (elagt'*elagt)\elagt' * et;
rho_last = rho;
rho = res_rho;
converg = sum(abs(rho - rho_last));
% iterout(iter,1) = rho;
iterout(iter,2) = converg;
iterout(iter,3) = iter;
iter = iter + 1;
end; % end of while loop
if iter == ITERMAX
    % error('ols_corc did not converge in 100 iterations');
    warning('ols_corc did not converge in 100 iterations');
end
result.iter= iterout(1:iter-1,:);
% after convergence produce a final set of estimates using rho-value
ystar = yt - ylag*rho;
xstar = zeros(nobs-arterms,nvar);
for ii = 1 : nvar
tmp = zeros(1,arterms);
for tt = 1:arterms
tmp(1,tt)=ii+nvar*(tt-1);
end
xstar(:,ii) = xt(:,ii) - xlag(:,tmp)*rho;
end
result.beta = (xstar'*xstar)\xstar' * ystar;
e = y - x*result.beta;
et = e(1+arterms:nobs,1);
elagt = zeros(nobs-arterms,arterms);
for tt = 1 : arterms
elagt(:,tt) = e(arterms-tt+1:nobs-tt,:);
end
u = et - elagt*rho;
result.vare = std(u)^2;
result.meth = 'olsc';
result.rho = rho;
result.iter = iterout(1:iter-1,:);
% % compute t-statistic for rho
% varrho = (1-rho*rho)/(nobs-2);
% result.trho = rho/sqrt(varrho);
(I did not adapt the t-test in the last two lines for rho vectors of length p, but that should be straightforward to do.)
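For completeness, a minimal usage sketch against the question's toy data (the intercept column and arterms = 3 are my own choices, meant to mirror the regARIMA(3,0,0) setup above):
y   = rand(100,1);
exo = rand(100,4);
X   = [ones(100,1) exo];   % add an intercept column (assumption)
res = olsc(y, X, 3);       % regression with AR(3) error terms
res.beta                   % regression coefficients
res.rho                    % AR coefficients of the error process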

PDF and CDF plot for central limit theorem using Matlab

I am struggling to plot the PDF and CDF graphs of
Sn = X1 + X2 + X3 + ... + Xn
using the central limit theorem, where n = 1, 2, 3, 4, 5, 10, 20, 40.
I am taking Xi to be a uniform continuous random variable taking values between (0,3).
Here is what I have done so far:
close all
%different sizes of input X
%N=[1 5 10 50];
N = [1 2 3 4 5 10 20 40];
%interval (0,3) for random variables
a=0;
b=3;
%to store sum of different sizes of input
for i=1:length(N)
    % generates uniform random numbers in the interval
    X = a + (b-a).*rand(N(i),1);
    S = zeros(1,length(X));
    S = cumsum(X);
    cd = cdf('Uniform',S,0,3);
    plot(cd);
    hold on;
end
legend('n=1','n=2','n=3','n=4','n=5','n=10','n=20','n=40');
title('CDF PLOT')
figure;
for i=1:length(N)
    % generates uniform random numbers in the interval
    X = a + (b-a).*rand(N(i),1);
    S = zeros(1,length(X));
    S = cumsum(X);
    cd = pdf('Uniform',S,0,3);
    plot(cd);
    hold on;
end
legend('n=1','n=2','n=3','n=4','n=5','n=10','n=20','n=40');
title('PDF PLOT')
My output is nowhere near what I am expecting; any help is much appreciated.
This can be done with vectorization using rand() and cumsum().
For example, the code below generates 10000 independent replications (columns), each containing 40 samples of a Uniform(0,3) distribution, and stores them in X. To meet the Central Limit Theorem (CLT) assumptions, they are independent and identically distributed (i.i.d.). Then cumsum() transforms this into 10000 copies of Sn = X1 + X2 + ..., where the first row holds 10000 copies of S_1 = X_1, the 5th row holds 10000 copies of S_5 = X_1 + X_2 + X_3 + X_4 + X_5, and the last row holds 10000 copies of S_40.
% MATLAB R2019a
% Setup
N = [1:5 10 20 40]; % values of n we are interested in
LB = 0; % lowerbound for X ~ Uniform(LB,UB)
UB = 3; % upperbound for X ~ Uniform(LB,UB)
n = 10000; % Number of copies (samples) for each random variable
% Generate random variates
X = LB + (UB - LB)*rand(max(N),n); % X ~ Uniform(LB,UB) (i.i.d.)
Sn = cumsum(X);
You can see from the image that for the n = 2 case the sum indeed follows a Triangular(0,3,6) distribution. For the n = 40 case, the sum is approximately Normally distributed (Gaussian) with mean 60 (40*mean(X) = 40*1.5 = 60). This shows the convergence in distribution for both the probability density function (PDF) and the cumulative distribution function (CDF).
Note: The CLT is often stated with convergence in distribution to a Normal distribution with zero mean as it has been shifted. Shifting the results by subtracting mean(Sn) = n*mean(X) = n*0.5*(LB+UB) from Sn gets this done.
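For instance, a minimal sketch of that shift (plus a standardization), reusing Sn, LB and UB from the code above:
muX  = 0.5*(LB+UB);                            % mean of a single Uniform(LB,UB)
varX = ((UB-LB)^2)/12;                         % variance of a single Uniform(LB,UB)
Z40  = (Sn(40,:) - 40*muX) ./ sqrt(40*varX);   % centered and scaled S_40
histogram(Z40,'Normalization','pdf')           % should resemble a standard Normal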
Code below isn't the gold standard but it produced the image.
figure
s(11) = subplot(6,2,1) % n = 1
histogram(Sn(1,:),'Normalization','pdf')
title(s(11),'n = 1')
s(12) = subplot(6,2,2)
cdfplot(Sn(1,:))
title(s(12),'n = 1')
s(21) = subplot(6,2,3) % n = 2
histogram(Sn(2,:),'Normalization','pdf')
title(s(21),'n = 2')
s(22) = subplot(6,2,4)
cdfplot(Sn(2,:))
title(s(22),'n = 2')
s(31) = subplot(6,2,5) % n = 5
histogram(Sn(5,:),'Normalization','pdf')
title(s(31),'n = 5')
s(32) = subplot(6,2,6)
cdfplot(Sn(5,:))
title(s(32),'n = 5')
s(41) = subplot(6,2,7) % n = 10
histogram(Sn(10,:),'Normalization','pdf')
title(s(41),'n = 10')
s(42) = subplot(6,2,8)
cdfplot(Sn(10,:))
title(s(42),'n = 10')
s(51) = subplot(6,2,9) % n = 20
histogram(Sn(20,:),'Normalization','pdf')
title(s(51),'n = 20')
s(52) = subplot(6,2,10)
cdfplot(Sn(20,:))
title(s(52),'n = 20')
s(61) = subplot(6,2,11) % n = 40
histogram(Sn(40,:),'Normalization','pdf')
title(s(61),'n = 40')
s(62) = subplot(6,2,12)
cdfplot(Sn(40,:))
title(s(62),'n = 40')
sgtitle({'PDF (left) and CDF (right) for Sn with n \in \{1, 2, 5, 10, 20, 40\}';'note different axis scales'})
for tgt = [11:10:61 12:10:62]
xlabel(s(tgt),'Sn')
if rem(tgt,2) == 1
ylabel(s(tgt),'pdf')
else % rem(tgt,2) == 0
ylabel(s(tgt),'cdf')
end
end
Key functions used for plot: histogram() from base MATLAB and cdfplot() from the Statistics toolbox. Note this could be done manually without requiring the Statistics toolbox with a few lines to obtain the cdf and then just calling plot().
There was some concern in comments over the variance of Sn.
Note the variance of Sn is given by (n/12)*(UB-LB)^2 (derivation below). Monte Carlo simulation shows our samples of Sn do have the correct variance; indeed, it converges to this as n gets larger. Simply call var(Sn(40,:)).
% with n = 10000
var(Sn(40,:)) % var(S_40) = 30 (will vary slightly depending on random seed)
(40/12)*((UB-LB)^2) % 29.9505
You can see the convergence is very good by S_40:
step = 0.01;
Domain = 40:step:80;
mu = 40*(LB+UB)/2;
sigma = sqrt((40/12)*((UB-LB)^2));
figure, hold on
histogram(Sn(40,:),'Normalization','pdf')
plot(Domain,normpdf(Domain,mu,sigma),'r-','LineWidth',1.4)
ylabel('pdf')
xlabel('S_n')
Derivation of the mean and variance of Sn:
E[Sn] = E[X1 + ... + Xn] = E[X1] + ... + E[Xn] = n*E[X] = n*(LB+UB)/2
Var(Sn) = Var(X1 + ... + Xn) = Var(X1) + ... + Var(Xn) = n*Var(X) = (n/12)*(UB-LB)^2
For the expectation (mean), the second equality holds by linearity of expectation and the third equality holds since the X_i are identically distributed. The variance splits the same way because the X_i are independent, and each Var(X_i) = (UB-LB)^2/12 for a uniform distribution.
The discrete version of this is posted here.

Neural Network Backpropagation Algorithm Implementation

I implemented a neural network backpropagation algorithm in MATLAB; however, it is not training correctly. The training data is a matrix X = [x1, x2], dimension 2 x 200, and I have a target matrix T = [target1, target2], dimension 2 x 200. The first 100 columns in T can be [1; -1] for class 1, and the second 100 columns in T can be [-1; 1] for class 2.
theta = 0.1; % criterion to stop
eta = 0.1; % step size
Nh = 10; % number of hidden nodes
For some reason the total training error is always 1.000; it never gets close to theta, so it runs forever.
I used the standard backpropagation formulas (they are reproduced in the step comments in the code below).
The total training error is the mean squared error between the targets and the network outputs, averaged over all training samples.
The code is well documented below. I would appreciate any help.
clear;
close all;
clc;
%%('---------------------')
%%('Generating dummy data')
%%('---------------------')
d11 = [2;2]*ones(1,70)+2.*randn(2,70);
d12 = [-2;-2]*ones(1,30)+randn(2,30);
d1 = [d11,d12];
d21 = [3;-3]*ones(1,50)+randn([2,50]);
d22 = [-3;3]*ones(1,50)+randn([2,50]);
d2 = [d21,d22];
hw5_1 = d1;
hw5_2 = d2;
save hw5.mat hw5_1 hw5_2
x1 = hw5_1;
x2 = hw5_2;
% step 1: Construct training data matrix X=[x1,x2], dimension 2x200
training_data = [x1, x2];
% step 2: Construct target matrix T=[target1, target2], dimension 2x200
target1 = repmat([1; -1], 1, 100); % class 1
target2 = repmat([-1; 1], 1, 100); % class 2
T = [target1, target2];
% step 3: normalize training data
training_data = training_data - mean(training_data(:));
training_data = training_data / std(training_data(:));
% step 4: specify parameters
theta = 0.1; % criterion to stop
eta = 0.1; % step size
Nh = 10; % number of hidden nodes; the actual number of hidden nodes should be 11 (including a bias)
Ni = 2; % dimension of input vector = number of input nodes; the actual number of input nodes should be 3 (including a bias)
No = 2; % number of classes = number of output nodes
% step 5: Initialize the weights
a = -1/sqrt(No);
b = +1/sqrt(No);
inputLayerToHiddenLayerWeight = (b-a).*rand(Ni, Nh) + a
hiddenLayerToOutputLayerWeight = (b-a).*rand(Nh, No) + a
J = inf;
p = 1;
% activation function
% f(net) = a*tanh(b*net),
% f'(net) = a*b*sech2(b*net)
a = 1.716;
b = 2/3;
while J > theta
% step 6: randomly choose one training sample vector from X,
% together with its target vector
k = randi([1, size(training_data, 2)]);
input_X = training_data(:,k);
input_T = T(:,k);
% step 7: Calculate net_j values for hidden nodes in layer 1
% hidden layer output before activation function applied
netj = inputLayerToHiddenLayerWeight' * input_X;
% step 8: Calculate hidden node output Y using activation function
% apply activation function to hidden layer neurons
Y = a*tanh(b*netj);
% step 9: Calculate net_k values for output nodes in layer 2
% output layer output before activation function applied
netk = hiddenLayerToOutputLayerWeight' * Y;
% step 10: Calculate output node output Z using the activation function
% apply activation function to the output layer neurons
Z = a*tanh(b*netk);
% step 11: Calculate sensitivity delta_k = (target - Z) * f'(Z)
% find the error between the expected_output and the neuron output
% we got using the weights
% delta_k = (expected - output) * activation(output)
delta_k = [];
for i=1:size(Z)
yi = Z(i,:);
expected_output = input_T(i,:);
delta_k = [delta_k; (expected_output - yi) ...
* a*b*(sech(b*yi)).^2];
end
% step 12: Calculate sensitivity
% delta_j = Sum_k(delta_k * hidden-to-out weights) * f'(net_j)
% error = (weight_k * error_j) * activation(output)
delta_j = [];
for j=1:size(Y)
yi = Y(j,:);
error = 0;
for k=1:size(delta_k)
error = error + delta_k(k,:)*hiddenLayerToOutputLayerWeight(j, k);
end
delta_j = [delta_j; error * (a*b*(sech(b*yi)).^2)];
end
% step 13: update weights
%2x10
inputLayerToHiddenLayerWeight = [];
for i=1:size(input_X)
xi = input_X(i,:);
wji = [];
for j=1:size(delta_j)
wji = [wji, eta * xi * delta_j(j,:)];
end
inputLayerToHiddenLayerWeight = [inputLayerToHiddenLayerWeight; wji];
end
inputLayerToHiddenLayerWeight
%10x2
hiddenLayerToOutputLayerWeight = [];
for j=1:size(Y)
yi = Y(j,:);
wjk = [];
for k=1:size(delta_k)
wjk = [wjk, eta * delta_k(k,:) * yi];
end
hiddenLayerToOutputLayerWeight = [hiddenLayerToOutputLayerWeight; wjk];
end
hiddenLayerToOutputLayerWeight
% Mean Square Error
J = 0;
for j=1:size(training_data, 2)
X = training_data(:,j);
t = T(:,j);
netj = inputLayerToHiddenLayerWeight' * X;
Y = a*tanh(b*netj);
netk = hiddenLayerToOutputLayerWeight' * Y;
Z = a*tanh(b*netk);
J = J + immse(t, Z);
end
J = J/size(training_data, 2)
p = p + 1;
if p == 4
break;
end
end
% testing neural network using the inputs
test_data = [[2; -2], [-3; -3], [-2; 5], [3; -4]];
for i=1:size(test_data, 2)
end
Weight decay isn't essential for Neural Network training.
What I did notice was that your feature normalization wasn't correct.
The correct algorithm for scaling data to the range of 0 to 1 is
(x - min) / (max - min)
Note: you apply this to every element within the array (or vector). Data inputs for a NN need to be within the range [0,1]. (Technically they can be a little bit outside of that, roughly [-3,3], but values further from 0 make training difficult.)
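As a rough sketch (my assumptions: per-feature scaling of the 2 x 200 training matrix; implicit expansion needs R2016b or newer, otherwise use bsxfun):
mins = min(training_data, [], 2);                          % per-feature minimum (2 x 1)
maxs = max(training_data, [], 2);                          % per-feature maximum (2 x 1)
training_data = (training_data - mins) ./ (maxs - mins);   % every feature now lies in [0,1]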
edit*
I am unaware of this activation function
a = 1.716;
b = 2/3;
% f(net) = a*tanh(b*net),
% f'(net) = a*b*sech2(b*net)
It seems like a variation on tanh.
Could you elaborate on what it is?
If your net still doesn't work, give me an update and I'll look at your code more closely.

Can't recover the parameters of a model using ode45

I am trying to simulate the rotation dynamics of a system. I am testing my code to verify that it works using simulated data, but I can never recover the parameters I pass to the model. In other words, I can't re-estimate the parameters I chose for the model.
I am using MATLAB for that and specifically ode45. Here is my code:
% Load the input-output data
[torque outputs] = DataLogs2();
u = torque;
% using the simulation data
Ixx = 1.00;
Iyy = 2.00;
Izz = 3.00;
x0 = [0; 0; 0];
Ts = .02;
t = 0:Ts:Ts * ( length(u) - 1 );
[ T, x ] = ode45( @(t,x) rotationDyn( t, x, u(1+floor(t/Ts),:), Ixx, Iyy, Izz), t, x0 );
w = x';
N = length(w);
q = 1; % a counter for the A and B matrices
% The Algorithm
for k=1:1:N
    w_telda = [  0       -w(3,k)   w(2,k); ...
                 w(3,k)   0       -w(1,k); ...
                -w(2,k)   w(1,k)   0     ];
    if k == N % to handle the problem of the last iteration
        w_dash(:,k) = (-w(:,k))/Ts;
    else
        w_dash(:,k) = (w(:,k+1)-w(:,k))/Ts;
    end
    a = kron( w_dash(:,k)', eye(3) ) + kron( w(:,k)', w_telda );
    A(q:q+2,:) = a;       % a 3N*9 matrix
    B(q:q+2,:) = u(k,:)'; % a 3N*1 matrix % u(:,k)
    q = q + 3;
end
% Forcing J to be diagonal. This is the case when we consider our quadcopter as two thin uniform
% rods crossed at the origin with a point mass (motor) at the end of each.
A_new = [A(:, 1) A(:, 5) A(:, 9)];
vec_J_diag = A_new\B;
J_diag = diag([vec_J_diag(1), vec_J_diag(2), vec_J_diag(3)])
eigenvalues_J_diag = eig(J_diag)
error = norm(A_new*vec_J_diag - B)
where my dynamic model is defined as:
function [dw, y] = rotationDyn(t, w, tau, Ixx, Iyy, Izz, varargin)
% The output equation
y = [w(1); w(2); w(3)];
% State equation
% dw = (I^-1)*( tau - cross(w, I*w) );
dw = [Ixx^-1 * tau(1) - ((Izz-Iyy)/Ixx)*w(2)*w(3);
Iyy^-1 * tau(2) - ((Ixx-Izz)/Iyy)*w(1)*w(3);
Izz^-1 * tau(3) - ((Iyy-Ixx)/Izz)*w(1)*w(2)];
end
Practically, what this code should do is calculate the eigenvalues of the inertia matrix J, i.e. recover the Ixx, Iyy, and Izz that I passed to the model at the very beginning (1, 2 and 3), but all I get are wrong results.
Is the problem with using ode45?
Well, the problem wasn't with the ode45 call. The problem is that in system identification you can only build an (n-1)-sample signal out of an n-sample signal (the finite difference w_dash needs the next sample), so the loop has to end at N-1 in the code above.
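A minimal sketch of that fix applied to the loop above (only the loop bound changes and the special case for the last iteration disappears; q, A and B are initialized exactly as before):
for k = 1:N-1
    w_telda = [  0       -w(3,k)   w(2,k); ...
                 w(3,k)   0       -w(1,k); ...
                -w(2,k)   w(1,k)   0     ];
    % the forward difference is only valid while a next sample exists
    w_dash(:,k) = (w(:,k+1) - w(:,k))/Ts;
    A(q:q+2,:) = kron( w_dash(:,k)', eye(3) ) + kron( w(:,k)', w_telda );
    B(q:q+2,:) = u(k,:)';
    q = q + 3;
end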