I am trying to solve a second order differential equation using ode45 in MATLAB, with matrices as inputs. I am stuck on a couple of errors, including:
"In an assignment A(I) = B, the number of elements in B and
I must be the same."
The second order differential equations are given below:
dy(1)= diag(ones(1,100) - 0.5*y(2))*Co;
dy(2)= -1 * Laplacian(y(1)) * y(2);
Main function call is:
[T,Y] = ode45(@rigid,[0.000 100.000],[Co Xo]);
Here, Co is a matrix of size 100x100 and Xo is a column vector of size 100x1. Laplacian is a pre-defined function that computes the matrix Laplacian.
I would appreciate any help with this. Should I reshape the input matrices and vectors so that their dimensions agree, or something similar?
Your guess is correct. The MATLAB ODE suite can only solve vector-valued ODEs, i.e. ODEs of the form y' = f(t,y). In your case you should convert y and dy back and forth between a matrix and an array using reshape.
To be more precise, the initial condition will be transformed into the array
y0 = reshape([Co Xo], 100*101, 1);
while y will be obtained with
y_matrix = reshape(y, 100, 101);
y1 = y_matrix(:,1:100);
y2 = y_matrix(:,101);
After having computed the matrices dy1 and dy2, you will have to convert them into an array with
dy = reshape([dy1 dy2], 100*101, 1);
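Putting the pieces together, a minimal sketch of the ODE function might look like this (the right-hand sides mirror the question's equations, with y1 standing in for the evolving Co; adapt them to your actual model):
function dy = rigid(t, y)
    % Unpack the 100*101 state vector: columns 1:100 hold the matrix
    % state, column 101 holds the vector state
    y_matrix = reshape(y, 100, 101);
    y1 = y_matrix(:, 1:100);
    y2 = y_matrix(:, 101);
    % Matrix-valued right-hand sides, following the question's equations
    dy1 = diag(ones(100, 1) - 0.5*y2) * y1;
    dy2 = -Laplacian(y1) * y2;
    % Pack everything back into one column vector for ode45
    dy = reshape([dy1 dy2], 100*101, 1);
end
The call then becomes [T,Y] = ode45(@rigid, [0 100], y0), and each row of Y can be reshaped back into a 100x101 matrix.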
Aside from the limitations of ode45, your code gives that error because, in MATLAB, matrices are not indexed that way. In fact, if you define A = magic(5), then A(11) gives the eleventh element of A in column-major order, i.e. 1.
I have a complex equation involving matrices:
R = expm(X)*A + (expm(X)-I)*inv(X)*B*U;
where R, B and U are known matrices.
I is an identity matrix.
I need to solve for X. Is there any way to solve this in MATLAB?
If your equation is nonlinear and you have access to the MATLAB Optimization Toolbox, you can use the fsolve function (you can still use it for a linear equation, but it may not be the most efficient approach). You just need to reformulate your equation into the form F(X) = 0, where X is a vector or a matrix. For example, if X is a 2-by-2 matrix (note that expm requires a square matrix):
Define your function to solve:
function F = YourComplexEquation(X)
    % A, B, U, and R must be accessible here (e.g. defined in an
    % enclosing function, or captured via an anonymous function)
    I = eye(size(X));
    Fmatrix = expm(X)*A + (expm(X) - I)*inv(X)*B*U - R;
    % This last line is because fsolve expects F to be a vector, not a matrix
    F = Fmatrix(:);
end
Then call fsolve with an initial guess (avoid X = 0, where inv(X) is singular):
X = fsolve(@YourComplexEquation, eye(2));
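If A, B, U, and R live in your workspace, a convenient alternative is to capture them in an anonymous function so that fsolve only sees the unknown X. A hypothetical sketch with 2-by-2 matrices (the values here are placeholders, purely for illustration):
% Hypothetical known matrices
A = magic(2); B = eye(2); U = eye(2); R = rand(2);
% Capture the knowns; fsolve varies only X
fun = @(X) reshape(expm(X)*A + (expm(X) - eye(2))*inv(X)*B*U - R, [], 1);
X = fsolve(fun, eye(2));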
I am trying to learn how to code linear regression. The matrix statistics_data holds the yeast growth year in the first column, the value of a chemical component in the second column, and the value of the population in the third column. Once theta is calculated using the least squares formulation, I want to predict the population using:
pred_year = 2020;
pred_year_val = [1 2020];
which is giving this error:
Error using *
Inner matrix dimensions must agree.
Error in main_normal_equation (line 44)
pred_value = pred_year_val * theta;
Below is the code:
statistics_data = [2007, 2,   9182927;
                   2008, 3,   9256347;
                   2009, 3.5, 9340682;
                   2010, 4,   9415570;
                   2011, 5,   9482855;
                   2012, 4.8, 9555893;
                   2013, 4.9, 9644864;
                   2014, 5,   9747355;
                   2015, 5,   9851017;
                   2016, 5,   9995153;
                   2017, 5,   10120242];
% Convert to independent variable matrix and response
X = (statistics_data(:,1:2));
y = (statistics_data(:,3));
% Convert matrix values to double
X = double(X);
y = double(y);
hold on;
% Set the x-axis label
xlabel('Year');
% Set the y-axis label
ylabel('Population');
% Plot population data
plot(X, y, 'rx', 'MarkerSize', 10);
m = length(y);
% Add ones column
X = [ones(m, 1) X];
% Normal Equation
theta = (pinv(X'*X))*X'*y
% Predict population for 2020
pred_year = 2020;
pred_year_val = [1 2020];
% Calculate predicted value
pred_value = pred_year_val * theta;
% Plot linear regression line
plot(X(:,2), X*theta, '-')
fprintf('Predicted population in 2020 is %d people\n ', int64(pred_value));
In MATLAB, the * operator performs a matrix multiplication. Matrix multiplication has strict rules about the dimensions of the multiplied matrices.
Inspecting your code, it does not seem that your intent is to do a matrix multiply...
You can multiply a matrix by a scalar using *, which scales each value in the matrix accordingly.
You can also do element-by-element (element-wise) multiplication using the .* operator.
To resolve your issue you must clarify whether you intended matrix multiplication, scalar multiplication, or element-wise multiplication. Then you must set your operands and operator to reflect what you aim to achieve.
It isn't clear to me exactly how the math in your code is supposed to be executed; otherwise I could show you where your operators and operands must be changed.
You could start by reviewing the documentation here: https://www.mathworks.com/help/matlab/matlab_prog/array-vs-matrix-operations.html
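As a quick illustration of the difference:
a = [1 2; 3 4];
b = [5 6; 7 8];
a * b    % matrix multiply: columns of a must match rows of b
a .* b   % element-wise multiply: operands must be the same size
2 * a    % scalar multiply: scales every element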
So pred_year_val has size [1 2] while theta has size [3 1] (one coefficient each for the intercept, the year, and the chemical component). Since the number of columns of pred_year_val (2) is not equal to the number of rows of theta (3), we cannot perform a matrix multiplication, i.e. the execution of
pred_value = pred_year_val * theta;
is bound to fail. It seems you need to add a value for the chemical component to pred_year_val.
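For example, supplying an assumed chemical value for 2020 (here 5, purely as a placeholder) makes the dimensions agree:
pred_chem = 5;                        % assumed 2020 chemical value
pred_year_val = [1 2020 pred_chem];   % 1x3, matching the 3 columns of X
pred_value = pred_year_val * theta;   % (1x3)*(3x1) = scalar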
The following is a function that takes two equal-sized vectors X and Y and is supposed to return a single correlation coefficient for image correspondence. The function is supposed to work like the built-in corr(X,Y) function in MATLAB when given two equal-sized vectors. Right now my code produces a vector of two-number results instead of single numbers. How do I fix this?
function result = myCorr(X, Y)
    meanX = mean(X);
    meanY = mean(Y);
    stdX = std(X);
    stdY = std(Y);
    mult = zeros(size(X));           % preallocate the products
    for i = 1:length(X)
        X(i) = (X(i) - meanX)/stdX;  % standardize each sample
        Y(i) = (Y(i) - meanY)/stdY;
        mult(i) = X(i) * Y(i);       % store each product instead of overwriting
    end
    result = sum(mult)/(length(X) - 1);
end
Edit: To clarify, I want myCorr(X,Y) above to produce the same output as MATLAB's corr(X,Y) when given equal-sized vectors of image intensity values.
Edit 2: Now the format of the output is correct; however, the values are off by a lot.
I recommend you use r = corrcoef(X,Y); it will give you the normalized r value you are looking for in a 2x2 matrix, and you can just return the r(2,1) entry as your answer. For row vectors X and Y, doing this is equivalent to
r = (X-mean(X))*(Y-mean(Y))'/(sqrt(sum((X-mean(X)).^2))*sqrt(sum((Y-mean(Y)).^2)))
However, if you really want to do what you mentioned in the question, you can also do
r = (X)*(Y)'/(sqrt(sum((X-mean(X)).^2))*sqrt(sum((Y-mean(Y)).^2)))
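A quick sanity check sketch (assuming column vectors of intensity values; transpose if yours are rows):
X = rand(100, 1);
Y = 0.7*X + 0.3*rand(100, 1);
r_builtin = corr(X, Y);    % reference value
R = corrcoef(X, Y);        % 2x2 matrix; the off-diagonal entry is r
fprintf('corr: %.6f  corrcoef: %.6f\n', r_builtin, R(2,1));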
My overall goal is to use the MATLAB symbolic toolbox to simplify the process of formulating and solving for the sensitivities of solutions to ordinary differential equations with respect to the parameters in the equations. In my case I have an ODE with 2 states and 10 parameters. A smaller, but representative, example would look like
X = sym('X', [2 1]) % Vector representing state variables
p = sym('p', [3 1]) % Vector representing parameters
% FitzHugh-Nagumo equations
rhs_1 = symfun(p(3)*(X(1) - X(1)^3/3 + X(2)), [X; p])
rhs_2 = symfun(-(X(1) - p(1) + p(2)*X(2))/p(3), [X; p])
I can then get the partial derivatives of the RHS of the ODE with respect to the parameters, which are used to solve for the sensitivities, using a command like gradient(rhs_1, p). But then I would like to convert this gradient to a MATLAB function that is a function of the vectors X and p, not a function of the elements of these vectors. I need the functions in this form because otherwise I cannot use the CVODES solver in the sundialsTB toolbox. Is this possible? Is there an easier way to accomplish what I am trying to do?
Recognizing that a comma-separated list of function inputs is really just a cell array, you can do this by converting your vector inputs to a cell array of scalars using mat2cell:
x = 1:2;
p = 1:3;
% Split the stacked column [x; p] into a 5x1 cell array of scalars
v = mat2cell([x(:); p(:)], ones(numel(x) + numel(p), 1), 1);
% v{:} expands to a comma-separated list: rhs_1(x1, x2, p1, p2, p3)
y1 = rhs_1(v{:})
y2 = rhs_2(v{:})
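As an alternative sketch (under the definitions above): matlabFunction can generate a numeric function whose inputs are the whole vectors X and p via the 'Vars' option, which may fit CVODES more directly:
% Gradient of the first right-hand side with respect to the parameters
g1 = gradient(formula(rhs_1), p);
% Generate a handle taking the vectors X and p as its two inputs
g1_fun = matlabFunction(g1, 'Vars', {X, p});
y = g1_fun([1; 2], [1; 2; 3])   % called with plain numeric vectors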
I'm trying to write a program that gets a matrix A of any size and computes its SVD decomposition:
A = U * S * V'
where A is the matrix the user enters, U is an orthogonal matrix composed of the eigenvectors of A * A', S is a diagonal matrix of the singular values, and V is an orthogonal matrix of the eigenvectors of A' * A.
Problem is: the MATLAB function eig sometimes returns the wrong eigenvectors.
This is my code:
function [U,S,V] = badsvd(A)
% Left singular vectors: eigenvectors of A*A'
W = A*A';
[U,S] = eig(W);
% Selection sort: reorder eigenvalues (and eigenvectors) high to low
for i = 1:size(W,1)
    maxval = S(i,i);           % avoid shadowing the built-in max
    temp_index = i;
    for j = i:size(W,1)
        if S(j,j) > maxval
            maxval = S(j,j);
            temp_index = j;
        end
    end
    temp = S(temp_index,temp_index);
    S(temp_index,temp_index) = S(i,i);
    S(i,i) = temp;
    temp = U(:,temp_index);
    U(:,temp_index) = U(:,i);
    U(:,i) = temp;
end
% Right singular vectors: eigenvectors of A'*A
W = A'*A;
[V,s] = eig(W);
for i = 1:size(W,1)
    maxval = s(i,i);
    temp_index = i;
    for j = i:size(W,1)
        if s(j,j) > maxval
            maxval = s(j,j);
            temp_index = j;
        end
    end
    temp = s(temp_index,temp_index);
    s(temp_index,temp_index) = s(i,i);
    s(i,i) = temp;
    temp = V(:,temp_index);
    V(:,temp_index) = V(:,i);
    V(:,i) = temp;
end
% The eigenvalues of A*A' are the squared singular values
S = sqrt(S);
end
My code returns the correct S matrix, and also "nearly" correct U and V matrices, but some of the columns are multiplied by -1. Obviously, if t is an eigenvector then -t is also an eigenvector, but with the signs inverted on some of the columns (not all of them) I don't get A = U * S * V'.
Is there any way to fix this?
Example: for the matrix A=[1,2;3,4] my function returns:
U=[0.4046,-0.9145;0.9145,0.4046]
and the built-in MATLAB svd function returns:
u=[-0.4046,-0.9145;-0.9145,0.4046]
Note that eigenvectors are not unique. Multiplying by any nonzero constant, including -1 (which simply flips the sign), gives another valid eigenvector. This is clear from the definition of an eigenvector:
A·v = λ·v
MATLAB chooses to normalize the eigenvectors to have a norm of 1.0, but the sign is arbitrary:
For eig(A), the eigenvectors are scaled so that the norm of each is 1.0.
For eig(A,B), eig(A,'nobalance'), and eig(A,B,flag), the eigenvectors are not normalized.
Now, as you know, SVD and eigendecomposition are related. Below is some code to test this fact. Note that svd and eig return results in different orders (svd sorts singular values high to low, while eig typically returns eigenvalues in ascending order):
% some random matrix
A = rand(5);
% singular value decomposition
[U,S,V] = svd(A);
% eigenvectors of A'*A are the same as the right-singular vectors
[V2,D2] = eig(A'*A);
[D2,ord] = sort(diag(D2), 'descend');
S2 = diag(sqrt(D2));
V2 = V2(:,ord);
% eigenvectors of A*A' are the same as the left-singular vectors
[U2,D2] = eig(A*A');
[D2,ord] = sort(diag(D2), 'descend');
S3 = diag(sqrt(D2));
U2 = U2(:,ord);
% check results
A
U*S*V'
U2*S2*V2'
I get very similar results (ignoring minor floating-point errors):
>> norm(A - U*S*V')
ans =
7.5771e-16
>> norm(A - U2*S2*V2')
ans =
3.2841e-14
EDIT:
To get consistent results, one usually adopts a convention, for example requiring that the first (or largest-magnitude) element of each eigenvector be positive. If an eigenvector does not follow the rule, you multiply it by -1 to flip the sign. For an SVD, note that flipping the sign of a left-singular vector preserves A = U*S*V' only if you also flip the matching right-singular vector.
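A minimal sketch of such a convention, applied to the U2/V2 pair computed above (taking the sign from the left vectors is an arbitrary choice):
% Flip each singular-vector pair so that the largest-magnitude
% entry of each left-singular vector is positive
for k = 1:size(U2, 2)
    [~, idx] = max(abs(U2(:, k)));
    if U2(idx, k) < 0
        U2(:, k) = -U2(:, k);
        V2(:, k) = -V2(:, k);   % flip the matching right vector too
    end
end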