Why does my Markov cluster algorithm (MCL) produce NaN as a result in Matlab? - matlab

I have tried the Markov cluster algorithm (MCL) in Matlab, but sadly I got a 67*67 matrix whose elements are all NaN.
Could anyone tell me what's wrong?
function adjacency_matrixmod2 = mcl(adjacency_matrixmod2)
% test the explanations in Stijn van Dongen's thesis.
%
% #author gregor :: arbylon . net
if nargin < 1
    % m contains T(G3 + I) as stochastic matrix
    load -ascii adjacency_matrixmod2.txt
end
p = 2;
minval = 0.001;
e = 1.;
emax = 0.001;
i = 1; % iteration counter (referenced by the fprintf calls below)
while e > emax
    fprintf('iteration %i before expansion:\n', i);
    adjacency_matrixmod2
    fprintf('iteration %i after expansion/before inflation:\n', i);
    m2 = expand(adjacency_matrixmod2)
    fprintf('inflation:\n')
    [adjacency_matrixmod2, e] = inflate(m2, p, minval);
    fprintf('residual energy: %f\n', e);
    i = i + 1;
end % while e
end % mcl
% expand by multiplying m * m
% this preserves column (or row) normalisation
function m2 = expand(adjacency_matrixmod2)
m2 = adjacency_matrixmod2 * adjacency_matrixmod2;
end
% inflate by Hadamard potentiation
% and column re-normalisation
% prune elements of m that are below minval
function [m2, energy] = inflate(adjacency_matrixmod2, p, minval)
% inflation
m2 = adjacency_matrixmod2 .^ p;
% pruning
m2(m2 < minval) = 0;
% normalisation
dinv = diag(1 ./ sum(m2));
m2 = m2 * dinv;
% calculate residual energy
maxs = max(m2);
sqsums = sum(m2 .^ 2);
energy = max(maxs - sqsums);
end
This is the code I have used, and its input is an adjacency matrix.
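One likely culprit (worth checking first, though I cannot confirm it against your data) is the normalisation step in inflate: if pruning zeroes out an entire column, sum(m2) contains a zero, 1./sum(m2) produces Inf, and the product m2 * dinv then generates 0*Inf = NaN entries, which propagate through every later iteration until the whole matrix is NaN. A minimal sketch of a guard, keeping the original variable names:

% guard the column normalisation against all-zero columns,
% which otherwise turn 1./sum(m2) into Inf and the product into NaN
colsums = sum(m2);
colsums(colsums == 0) = 1;   % leave empty columns as zeros
dinv = diag(1 ./ colsums);
m2 = m2 * dinv;

The same symptom appears if the input adjacency matrix already contains an all-zero column (an isolated node), so it is also worth checking min(sum(adjacency_matrixmod2)) before the first iteration.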

Related

Kalman filter in Matlab

There is an error that I can't solve; here is the code in Matlab:
% Define the system matrices A, C, and the initial state
A = [1 1;
     0 1];   % state transition matrix
C = [1 0];   % measurement matrix
x0 = [1 0];  % initial state
% Define the number of time steps and the noise covariance matrices
N = 50;           % number of time steps
Q_values = 0.1;   % values of Q to consider
R_values = 0.25;  % values of R to consider
% Pre-allocate memory for the states and estimated states
x = zeros(2, N);     % real state
xhat = zeros(2, N);  % state estimate
xpred = zeros(2, N); % predicted state
% Loop over each value of Q and R
for i = 1:length(Q_values)
    for j = 1:length(R_values)
        Q = Q_values(i) * eye(2); % covariance of process noise
        R = R_values(j) * eye(2); % covariance of measurement noise
        % Initialize the state estimate and its covariance
        xhat(:, 1) = x0;
        P = eye(2); % initial covariance of state estimate
        % Generate the real state and measurements with noise
        for k = 1:N
            w = sqrt(Q) * randn(2, 1); % process noise
            v = sqrt(R) * randn(1, 1); % measurement noise
            x(:, k+1) = A * x(:, k) + w; % real state
            yk = C * x(:, k) + v;        % measurement
            % Predict the state and covariance at time k
            xpred(:, k+1) = A * xhat(:, k); % state prediction
            P_pred = A * P * A' + Q;        % covariance prediction
            % Compute the Kalman gain
            K = P_pred * C' / (C * P_pred * C');
            % Update the state estimate and covariance
            xhat(:, k+1) = A * xpred(:, k+1) + K .* (yk - C .* xpred(:, k+1));
            P = (eye(2) - K * C) * P_pred;
        end
    end
end
The error is:
Unable to perform assignment because the size of the left side is 2-by-1 and the size of the right side
is 2-by-2.
Error in Untitled (line 42)
xhat(:, k+1) = A * xpred(:, k+1) + K .* (yk - C .* xpred(:, k+1));
Do you guys have some ideas how to change my code?
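The size mismatch comes from the elementwise operators. C .* xpred(:, k+1) multiplies a 1-by-2 row by a 2-by-1 column, which implicit expansion turns into a 2-by-2 matrix, so the whole right-hand side becomes 2-by-2 while xhat(:, k+1) is 2-by-1, exactly as the message says. The update should use matrix products, and xpred(:, k+1) should not be multiplied by A again, since it is already A * xhat(:, k). A hedged sketch of the corrected update, also assuming a scalar R (the measurement yk is scalar) and including R in the gain denominator as in the standard equations:

% scalar measurement-noise variance to match the scalar measurement yk
R = R_values(j);

% inside the k loop: standard update with matrix products (* not .*)
K = P_pred * C' / (C * P_pred * C' + R);  % 2x1 Kalman gain
xhat(:, k+1) = xpred(:, k+1) + K * (yk - C * xpred(:, k+1));
P = (eye(2) - K * C) * P_pred;

With a scalar R, the simulated noise v = sqrt(R) * randn(1, 1) also becomes a scalar, as it should for a scalar measurement.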

Matlab can't solve the collocation equations. Encountered a singular Jacobian

function bvp()
solinit = bvpinit(linspace(0,1,10), [0, 0.3, 0, 0.5]);
sol = bvp4c(@myprojec, @mybounds, solinit);
plot(sol.x, sol.y(1,:), '.');
% hold on can be used to suspend the graph so as to plot another graph
% on the same axes with different parameters
xlabel('\c');
ylabel('\theta');

function dydx = myprojec(x, y)
% enter the parameters
% x represents c
% y(i) represents f, theta
% f = y(1), f' = y(2), theta = y(4), theta' = y(5)
% star (*) replaced by 0
% infinity = 5
% Max infinity = 10
% B = input('input the value of Beta:');
B = 5;
% M = input('input the value of M:');
M = 1;
% Km = input('input the value of Km:');
Km = 1.5;
% Br = input('input the value of Br:');
Br = 12;
dydx = [y(2); (B^2 - M)*y(1) - Km; y(4); Br*((M - B^2)*y(1) - y(2)^2)];

% Now the boundary conditions
function res = mybounds(ya, yb)
% where to denote initial values of y at zero (0)
% finf denotes initial value of y at infinity (5 or 10)
% n = input('input the value of n:');
res = [ya(1)
       yb(1) - 1
       ya(4)
       yb(4) - 1];
The error I got is that Matlab cannot solve the collocation equations because a singular Jacobian was encountered. I would appreciate your suggestions.
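A singular Jacobian from bvp4c often traces back to the initial guess: with a constant guess like [0, 0.3, 0, 0.5], the linearized collocation system can be degenerate for this ODE. A hedged sketch of one thing to try, replacing the constant guess with a guess function whose profile at least satisfies the stated boundary conditions (the guess function below is illustrative, not part of the original code):

% supply a nonconstant initial guess on a finer mesh
solinit = bvpinit(linspace(0, 1, 40), @guess);
sol = bvp4c(@myprojec, @mybounds, solinit);

function g = guess(x)
% rough profile consistent with the boundary conditions:
% y1(0) = 0, y1(1) = 1, y4(0) = 0, y4(1) = 1
g = [x; 1; 1; x];
end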

Can I use t-SNE when the dimension is larger than the number of data points?

I am using t-SNE with the Matlab code from this web site (https://lvdmaaten.github.io/tsne/). However, an error occurs whenever I run the program on data whose dimension is larger than the number of data points. The code below is the code I currently use, and the error always occurs here:
M = M(:,ind(1:initial_dims));
The error is:
Index exceeds matrix dimensions.
Error in tsne (line 62)
M = M(:,ind(1:initial_dims));
I call this tsne function in Matlab with the command:
output = tsne(input, [], 2, 640, 30);
The input size is (162x640): the dimension is 640 and the number of data points is 162. The program below is the code from the website above.
function ydata = tsne(X, labels, no_dims, initial_dims, perplexity)
%TSNE Performs symmetric t-SNE on dataset X
%
%   mappedX = tsne(X, labels, no_dims, initial_dims, perplexity)
%   mappedX = tsne(X, labels, initial_solution, perplexity)
%
% The function performs symmetric t-SNE on the NxD dataset X to reduce its
% dimensionality to no_dims dimensions (default = 2). The data is
% preprocessed using PCA, reducing the dimensionality to initial_dims
% dimensions (default = 30). Alternatively, an initial solution obtained
% from another dimensionality reduction technique may be specified in
% initial_solution. The perplexity of the Gaussian kernel that is employed
% can be specified through perplexity (default = 30). The labels of the
% data are not used by t-SNE itself, however, they are used to color
% intermediate plots. Please provide an empty labels matrix [] if you
% don't want to plot results during the optimization.
% The low-dimensional data representation is returned in mappedX.
%
%
% (C) Laurens van der Maaten, 2010
% University of California, San Diego

if ~exist('labels', 'var')
    labels = [];
end
if ~exist('no_dims', 'var') || isempty(no_dims)
    no_dims = 2;
end
if ~exist('initial_dims', 'var') || isempty(initial_dims)
    initial_dims = min(50, size(X, 2));
end
if ~exist('perplexity', 'var') || isempty(perplexity)
    perplexity = 30;
end

% First check whether we already have an initial solution
if numel(no_dims) > 1
    initial_solution = true;
    ydata = no_dims;
    no_dims = size(ydata, 2);
    perplexity = initial_dims;
else
    initial_solution = false;
end

% Normalize input data
X = X - min(X(:));
X = X / max(X(:));
X = bsxfun(@minus, X, mean(X, 1));

% Perform preprocessing using PCA
if ~initial_solution
    disp('Preprocessing data using PCA...');
    if size(X, 2) < size(X, 1)
        C = X' * X;
    else
        C = (1 / size(X, 1)) * (X * X');
    end
    [M, lambda] = eig(C);
    [lambda, ind] = sort(diag(lambda), 'descend');
    M = M(:,ind(1:initial_dims));
    lambda = lambda(1:initial_dims);
    if ~(size(X, 2) < size(X, 1))
        M = bsxfun(@times, X' * M, (1 ./ sqrt(size(X, 1) .* lambda))');
    end
    X = bsxfun(@minus, X, mean(X, 1)) * M;
    clear M lambda ind
end

% Compute pairwise distance matrix
sum_X = sum(X .^ 2, 2);
D = bsxfun(@plus, sum_X, bsxfun(@plus, sum_X', -2 * (X * X')));

% Compute joint probabilities
P = d2p(D, perplexity, 1e-5); % compute affinities using fixed perplexity
clear D

% Run t-SNE
if initial_solution
    ydata = tsne_p(P, labels, ydata);
else
    ydata = tsne_p(P, labels, no_dims);
end
I am trying to understand this code, but I cannot understand the part where the error occurs:
if size(X, 2) < size(X, 1)
    C = X' * X;
else
    C = (1 / size(X, 1)) * (X * X');
end
Why is this condition needed? Since the size of X is (162x640), the else branch will be executed. I guess this is the problem: in the else branch, the size of C will be (162x162). However, the next line
M = M(:,ind(1:initial_dims));
uses initial_dims, which equals 640. Did I use this code in the wrong way? Or is it just not applicable to the data set I use?
According to the documentation: "The data is preprocessed using PCA, reducing the dimensionality to initial_dims dimensions (default = 30)." So you should leave this parameter unchanged the first time.
The condition if size(X, 2) < size(X, 1) is used to formulate the matrix for an economy SVD, so that the covariance matrix is smaller, which leads to faster computation.
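Concretely: with 162 data points, the else branch builds a 162x162 matrix C, and eig(C) can return at most 162 eigenvectors, so indexing ind(1:640) runs past the end of ind, which is exactly the "Index exceeds matrix dimensions" error. A minimal sketch of a call that stays within that limit, reusing the asker's input variable:

% clamp the PCA target dimension to what the eigendecomposition
% can deliver: at most min(n_points, n_dims) components
initial_dims = min(640, size(input, 1));  % here: min(640, 162) = 162
output = tsne(input, [], 2, initial_dims, 30);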

Matlab: Error in inverse operation when implementing Kalman filter

I am trying to implement the basic equations of the Kalman filter for the following one-dimensional AR(2) model:
x(t) = a_1 x(t-1) + a_2 x(t-2) + w(t)
y(t) = C x(t) + v(t)
The KF state-space model:
x(t+1) = A x(t) + w(t)
y(t) = C x(t) + v(t)
w(t) ~ N(0,Q)
v(t) ~ N(0,R)
where
% A - state transition matrix
% C - observation (output) matrix
% Q - state noise covariance
% R - observation noise covariance
% x0 - initial state mean
% P0 - initial state covariance
%%% Matlab script to simulate data and process using a Kalman filter for
%%% state estimation of an AR(2) time series.
% Linear system representation
% x_n+1 = A x_n + B w_n
% y_n = C x_n + v_n
% w = N(0,Q); v = N(0,R)
clc
clear all

T = 100;   % number of data samples
order = 2;
% True coefficients of AR model
a1 = 0.195;
a2 = -0.95;
A = [a1 a2;
     0  1];
C = [1 0];
B = [1;
     0];
x = [rand(order,1) zeros(order,T-1)];
sigma_2_w = 1;    % variance of the excitation signal driving the AR model (process noise)
sigma_2_v = 0.01; % variance of measurement noise
Q = eye(order);
P = Q;
% Simulate AR model time series, x
sqrtW = sqrtm(sigma_2_w);
% simulation of the system
for t = 1:T-1
    x(:,t+1) = A*x(:,t) + B*sqrtW*randn(1,1);
end
% noisy observation
y = C*x + sqrt(sigma_2_v)*randn(1,T);
R = sigma_2_v*diag(diag(x));
R = diag(R);
z = zeros(1,length(y));
z = y;
x0 = mean(y);
for i = 1:T-1
    [xpred, Ppred] = predict(x0, P, A, Q);
    [nu, S] = innovation(xpred, Ppred, z(i), C, R);
    [xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
end
% plot
xhat = xnew';
plot(xhat(:,1), 'red');
hold on;
plot(x(:,1));

function [xpred, Ppred] = predict(x0, P, A, Q)
xpred = A*x0;
Ppred = A*P*A' + Q;
end

function [nu, S] = innovation(xpred, Ppred, y, C, R)
nu = y - C*xpred;    %% innovation
S = R + C*Ppred*C';  %% innovation covariance
end

function [xnew, Pnew] = innovation_update(xpred, Ppred, nu, S, C)
K = Ppred*C'*inv(S);      %% Kalman gain
xnew = xpred + K*nu;      %% new state
Pnew = Ppred - Ppred*K*C; %% new covariance
end
The code throws the error:
Error using inv
Matrix must be square.
Error in innovation_update (line 2)
K = Ppred*C'*inv(S); %% Kalman gain
Error in Kalman_run (line 65)
[xnew, P] = innovation_update(xpred, Ppred, nu, S, C);
This is because the S matrix is not square. How do I make it square? Is there a problem with the steps before that?
As far as I understand it, the R matrix is supposed to be the covariance matrix of the measurement noise.
The following lines:
R = sigma_2_v*diag(diag(x));
R = diag(R);
change R from a 2x2 diagonal matrix into a 2x1 column vector.
Since your observation y is a scalar, the observation noise v must also be a scalar. This means R is a 1x1 covariance matrix, or simply the variance of the random variable v.
You should make R a scalar for your code to work properly.
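A minimal sketch of that change, reusing the asker's variable names: with a scalar R, the innovation covariance S = R + C*Ppred*C' is 1x1, so the gain computation needs no inv at all.

% scalar measurement-noise variance, matching the scalar observation y
R = sigma_2_v;
% in innovation_update, S is then 1x1 and division replaces inv(S)
K = Ppred*C' / S;  %% Kalman gain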

Implementing a neural network to figure out its cost

Cost function:
J(\theta) = (1/m) \sum_{i=1}^{m} \sum_{k=1}^{K} [ -y^{(i)}_{k} \log(h_{\theta}(x^{(i)})_{k}) - (1 - y^{(i)}_{k}) \log(1 - h_{\theta}(x^{(i)})_{k}) ]
I am trying to code the above expression in Matlab. Unfortunately I seem to be getting a cost of 10.441460 instead of 0.287629, so I'm off by a factor of over 36!
As for each of the symbols:
m is the number of training examples. [a scalar number]
K is the number of output nodes. [a scalar number]
y is the vector of training outputs. [an m by 1 vector]
y^{(i)}_{k} is the ith training output (target) for the kth output node. [a scalar number]
x^{(i)} is the ith training input. [a column vector for all the input nodes]
h_{\theta}(x^{(i)})_{k} is the value of the hypothesis at output k, with weights theta, and training input i. [a scalar number]
note: h_{\theta}(x^{(i)}) will be a column vector with K rows.
My attempt at the cost function:
Theta1 = [ones(1,size(Theta1,2)); Theta1];
X = [ones(m,1), X];  % Add a column of 1's to X
R = zeros(m,1);
for i = 1:m
    a = y(i) == [10 1:9];
    R(i) = -(a*(log(sigmoid(Theta2*(sigmoid(Theta1*X(i,:)'))))) + (1-a)*(log(1-sigmoid(Theta2*(sigmoid(Theta1*X(i,:)'))))))/m;
end
J = sum(R);
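One likely source of the discrepancy (hedged, since the surrounding setup is not shown): prepending a row of ones to Theta1 creates an extra hidden unit whose value is then passed through the sigmoid, so the "bias" fed into Theta2 is sigmoid(...) rather than exactly 1. In the usual formulation, the 1 is appended to the hidden activations after the sigmoid. A sketch of that vectorized feedforward cost, starting from the original X (before the column of ones was prepended) and reusing the asker's label mapping [10 1:9]:

a1 = [ones(m,1) X];            % input activations with bias column
z2 = a1 * Theta1';             % hidden-layer pre-activations
a2 = [ones(m,1) sigmoid(z2)];  % bias appended AFTER the sigmoid
h  = sigmoid(a2 * Theta2');    % m-by-K hypothesis values
Y  = double(bsxfun(@eq, y, [10 1:9]));  % m-by-K 0/1 target rows
J  = sum(sum(-Y .* log(h) - (1 - Y) .* log(1 - h))) / m;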
This will probably be useful for reference:
function [J grad] = nnCostFunction(nn_params, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, ...
                                   X, y, lambda)
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));
% Setup some useful variables
m = size(X, 1);
% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
%               following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
%         variable J. After implementing Part 1, you can verify that your
%         cost function computation is correct by verifying the cost
%         computed in ex4.m
%
% =========================================================================
end