Transformation function for a matrix in Matlab

I have two matrices in Matlab, A and B, and I am trying to find an easy way to produce a function that maps A to B. It should be as simple as a function of the form B = A*x + y, where x and y are fixed numbers, but I cannot seem to remember my basic math skills today. Is there an easy way of doing this in Matlab?

Edit
This is the answer to the OP's original question as explained in the comments.
Take two elements b1 and b2 from B and the corresponding elements a1 and a2 from A. Make sure that a1 ~= a2; if all elements of A are the same, the problem is trivial. Then compute
x = (b1 - b2) / (a1 - a2);       % slope
y = b1 - a1*x;                   % offset
err = B - A*x - y;               % residual over all elements
total_error = sum(abs(err(:)));
If the computed x and y do not satisfy the equation for every element, then total_error > 0 and no such x and y exist.
Actually, if x and y are just numbers, you can just do
B = A*x + y;
Matlab is capable of doing matrix times scalar arithmetic by broadcasting the number x to each element of A.
If x is a vector and A*x makes sense, you can do the same.
If y is a scalar or a vector of the same size as A*x, you can also do the same.
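A quick way to see both pieces together is a minimal sketch with made-up numbers: build B from a known x and y, then recover them with the formulas above.
% Made-up example: generate B from a known scalar x and y
A = [1 2; 3 5];               % any matrix whose entries are not all equal
x_true = 3; y_true = -2;
B = A*x_true + y_true;        % the scalars are applied to every element of A
% Recover x and y from two entries with distinct A values
a1 = A(1,1); a2 = A(2,1);
b1 = B(1,1); b2 = B(2,1);
x = (b1 - b2) / (a1 - a2);
y = b1 - a1*x;
% Check the fit over all elements
err = B - A*x - y;
total_error = sum(abs(err(:)))   % 0 here, since B was built as A*x + y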

Related

How to numerically solve a system with two matrices in Matlab?

I'm trying to numerically find the solution to A*cos x +B*sin x = C where A and B are two known square matrices of the same size (for example 100x100), and C is a known vector (100x1).
Without the second term (i.e. with a single matrix), I would use Jacobi or Gauss-Seidel to solve this problem and get x, but here I don't see how to proceed in Matlab.
Maybe it would be useful to solve the problem as A*X + B*sqrt(1-X^2) = C.
I would greatly appreciate any help, ideas or advice.
Thanks in advance
If I understood you correctly, you could use fsolve like this (c and x are vectors):
A = ones(2,2);
B = ones(2,2);
c = ones(2,1);
% initial point
x0 = ones(length(A), 1);
% call to fsolve
sol = fsolve(@(x) A*cos(x) + B*sin(x) - c, x0);
Here, we solve the nonlinear equation system F(x) = 0 with F: R^N -> R^N and F(x) = A * cos(x) + B*sin(x) - c.
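As a quick follow-up (a small sketch, reusing the variables above), you can check the returned solution by evaluating the residual:
% Verify that sol actually satisfies the system
residual = A*cos(sol) + B*sin(sol) - c;
norm(residual)      % should be close to zero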
Only for the sake of completeness, here's my previous answer, i.e. how one could do it in case C and X are matrices instead of vectors:
A = ones(2,2);
B = ones(2,2);
C = ones(2,2);
% initial point
x0 = ones(numel(A), 1);
% call to fsolve
fsolve(@(x) fun(x, A, B, C), x0)
function [y] = fun(x, A, B, C)
% Transform the input vector x into a matrix
X = reshape(x, size(A));
% Evaluate the matrix equation
Y = A * cos(X) + B*sin(X) - C;
% flatten the matrix Y to a row vector y
y = reshape(Y, [], 1);
end
Here, the idea is to transform the matrix equation system F: R^(N x N) -> R^(N x N) into an equivalent nonlinear system F: R^(N*N) -> R^(N*N).
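If you need the solution as a matrix again, a small sketch of how to get it back from the solution vector (reusing fun, A, B, C and x0 from above):
% Capture the solution vector and reshape it into matrix form
xsol = fsolve(@(x) fun(x, A, B, C), x0);
Xsol = reshape(xsol, size(A));
% Sanity check: the residual should be near zero
norm(A*cos(Xsol) + B*sin(Xsol) - C, 'fro')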

How to solve a differential equation with non-constant coefficient?

I have an equation like this:
dy/dx = a(x)*y + b
where a(x) is non-constant (a(x) = 1/x) and b is a vector (10000 rows).
How can I solve this equation?
Let me assume you would like to write a generic numerical solver for dy/dx = a(x)*y + b. Then you can use the function handle a(x) directly in the right-hand side you pass to one of the ODE solvers, e.g.
a = #(x) 1/x;
xdomain = [1 10];
b = rand(10000,1);
y0 = ones(10000,1);
[x,y] = ode45(@(x,y) a(x)*y + b, xdomain, y0);
plot(x,y)
Here, I've specified the domain of x as xdomain, and the value of y at the lower limit of x as y0.
From my comments, you can solve this without MATLAB. Assuming non-zero x, you can use the integrating factor 1/x (so that d(y/x)/dx = b/x) to get a 10000-by-1 solution y(x)
y_i(x) = b_i*x*ln(x) + c_i*x
with a 10000-by-1 vector of constants c, where y_i(x), b_i and c_i are the i-th entries of y(x), b and c respectively. The constant vector c can be determined at some point x0 as
c_i = y_i(x0)/x0 - b_i*ln(x0)
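A quick way to convince yourself is to compare the closed form against ode45 on a small made-up example (a minimal sketch; the sizes and starting point here are just placeholders):
% Compare the closed-form solution with ode45 on a small example
b  = rand(3,1);               % 3 rows instead of 10000 to keep the check quick
x0 = 1;                       % left end of the x domain (must be non-zero)
y0 = ones(3,1);               % initial condition y(x0)
c  = y0/x0 - b*log(x0);       % constants fixed by the initial condition
[x, ynum] = ode45(@(x,y) (1/x)*y + b, [x0 10], y0);
yexact = x.*log(x)*b.' + x*c.';   % closed form, same layout as ynum
max(abs(ynum(:) - yexact(:)))     % should be small (ode45 default tolerance)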

Monte Carlo integration of exp(-x^2/2) from x=-infinity to x=+infinity

I want to integrate
f(x) = exp(-x^2/2)
from x=-infinity to x=+infinity
by using the Monte Carlo method. I use the function randn() to generate all x_i for the function f(x_i) = exp(-x_i^2/2) that I want to integrate, and then calculate the mean value of f([x_1,..,x_n]). My problem is that the result depends on what values I choose for my borders x1 and x2 (see below). My result moves further away from the true value as I increase x1 and x2, although it should actually get better and better as x1 and x2 increase.
Does someone see my mistake?
Here is my Matlab code
clear all;
b=10; % border
x1 = -b; % left border
x2 = b; % right border
n = 10^6; % number of random numbers
x = randn(n,1);
f = ones(n,1);
g = exp(-(x.^2)/2);
F = ((x2-x1)/n)*f'*g;
The right value should be ~2.5066.
Thanks
Try this:
clear all;
b=10; % border
x1 = -b; % left border
x2 = b; % right border
n = 10^6; % number of random numbers
x = sort(abs(x1 - x2) * rand(n,1) + x1);
f = exp(-x.^2/2);
F = trapz(x,f)
F =
2.5066
Ok, let's start by writing the general case of MC integration:
I = S f(x) * p(x) dx, x in [a...b]
S here is the integral sign.
Usually, p(x) is a normalized probability density function, f(x) is what you want to integrate, and the algorithm is a very simple one:
set accumulator s to zero
start loop of N events
sample x randomly from p(x)
given x, compute f(x) and add to accumulator
back to start loop if not done
if done, divide accumulator by N and return it
In the simplest textbook case you have
I = S f(x) dx, x in [a...b]
which means the PDF is the uniform one
p(x) = 1/(b-a)
but what you have to sum is actually (b-a)*f(x), because your integral now looks like
I = S (b-a)*f(x) 1/(b-a) dx, x in [a...b]
In general, if both f(x) and p(x) could serve as a PDF, then it is a matter of choice whether you integrate f(x) over p(x), or p(x) over f(x). No difference! (Well, except maybe computation time.)
So, back to the particular integral (which is equal to \sqrt{2\pi}, I believe)
I = S exp(-x^2/2) dx, x in [-infinity...infinity]
You could use the more traditional approach like @Agriculturist and write it
I = S exp(-x^2/2)*(2a) 1/(2a) dx, x in [-a...a]
and sample x uniformly from the interval [-a...a], and for each x compute exp(), average it, and get the result.
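That uniform-sampling estimator is what the original code attempted; a minimal sketch (the fix is simply to draw x with rand instead of randn):
% Uniform-sampling Monte Carlo estimate of S exp(-x^2/2) dx over [-a, a]
a = 10;
n = 1e6;
x = -a + 2*a*rand(n,1);          % x ~ U(-a, a)
g = exp(-x.^2/2);
F = (2*a) * mean(g)              % (b-a) * average of f(x); approximately 2.5066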
From what I understand, you want to use exp() as PDF, so your integral looks like
I = S D * exp(-x^2/2)/D dx, x in [-infinity...infinity]
The PDF has to be normalized, so it includes the normalization factor D, which is exactly equal to \sqrt{2\pi} from the Gaussian integral.
Now f(x) is just a constant equal to D; it doesn't depend on x. This means that for each sampled x you should add to the accumulator a CONSTANT value D. After running N samples,
you'll have exactly N*D in the accumulator. To find the mean you divide by N, and as a result you get exactly D, which is \sqrt{2\pi}, which, in turn, is
2.5066.
Too rusty to write any matlab, and Happy New Year anyway
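In Matlab that last sampling scheme looks roughly like this (a minimal sketch; here f(x)/p(x) is constant, so the sample mean is exactly sqrt(2*pi)):
% Monte Carlo with the Gaussian itself as the sampling PDF
n = 1e6;
x = randn(n,1);                  % x ~ N(0,1), so p(x) = exp(-x.^2/2)/sqrt(2*pi)
f = exp(-x.^2/2);                % integrand
p = exp(-x.^2/2)/sqrt(2*pi);     % sampling density evaluated at the same points
F = mean(f./p)                   % every term equals sqrt(2*pi), so F = 2.5066...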

Using Matlab's regress like polyfit

I have:
x = [1970:1:2000]
y = [data]
size(x) = [30,1]
size(y) = [30,1]
I want:
% Yl = kx + m, where
[k,m] = polyfit(x,y,1)
For some reason I have to use "regress" for this.
Using k = regress(x,y) gives some totally random value that I have no idea where it comes from. How do I do it?
The number of outputs you get in "k" is dependent on the size of the input X, so you will not get both m and k just by putting in your x and y straight. From the docs:
b = regress(y,X) returns a p-by-1 vector b of coefficient estimates for a multilinear regression of the responses in y on the predictors in X. X is an n-by-p matrix of p predictors at each of n observations. y is an n-by-1 vector of observed responses.
It is not exactly stated, but the example in the help docs using the carsmall inbuilt dataset shows you how to set this up. For your case, you'd want:
X = [ones(size(x)) x]; % make sure this is 30 x 2
b = regress(y,X); % y should be 30 x 1, b should be 2 x 1
b(1) should then be your m, and b(2) your k.
regress can also provide additional outputs, such as confidence intervals, residuals, statistics such as r-squared, etc. The input remains the same, you'd just change the outputs:
[b,bint,r,rint,stats] = regress(y,X);
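For comparison, a minimal sketch with made-up data showing that polyfit returns the same two coefficients, just in the opposite order:
% Made-up data: y = 3*x + 7 plus noise
x = (1:30)';                        % 30-by-1 predictor
y = 3*x + 7 + randn(30,1);          % responses with slope 3, intercept 7
p = polyfit(x, y, 1);               % p(1) = slope k, p(2) = intercept m
b = regress(y, [ones(size(x)) x]);  % b(1) = intercept m, b(2) = slope k
[p(1) b(2); p(2) b(1)]              % columns should agree: [k k; m m]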

Solve matrix equation in matlab

I have an equation of the type c = Ax + By where c, x and y are vectors of dimensions say 50,000 X 1, and A and B are matrices with dimensions 50,000 X 50,000.
Is there any way in Matlab to find matrices A and B when c, x and y are known?
I have about 100,000 samples of c, x, and y. A and B remain the same for all.
Let X be the collection of all 100,000 xs you got (such that the i-th column of X equals the i-th vector x_i).
In the same manner we can define Y and C as 2D collections of ys and cs respectively.
What you wish to solve is for A and B such that
C = AX + BY
You have 2 * 50,000^2 unknowns (all entries of A and B) and numel(C) equations.
So, if the number of data vectors you have is 100,000, you have a single solution (up to linearly dependent samples). If you have more than 100,000 samples you may seek a least-squares solution.
Re-writing:
C = [A B] * [X ; Y] ==> [X' Y'] * [A';B'] = C'
So, I suppose
[A' ; B'] = pinv( [X' Y'] ) * C'
In matlab:
ABt = pinv( [X' Y'] ) * C';
A = ABt(1:50000,:)';
B = ABt(50001:end,:)';
Correct me if I'm wrong...
EDIT:
It seems like there is quite a fuss around dimensionality here. So, I'll try and make it as clear as possible.
Model: There are two (unknown) matrices A and B, each of size 50,000x50,000 (total 5e9 unknowns).
An observation is a triplet of vectors (x,y,c); each such vector has 50,000 elements (a total of 150,000 observed values per sample). The underlying model assumption is that an observation is generated by c = Ax + By.
The task: given n observations (that is n triplets of vectors { (x_i, y_i, c_i) }_i=1..n) the task is to uncover A and B.
Now, each sample (x_i,y_i,c_i) induces 50,000 equations of the form c_i = Ax_i + By_i in the unknowns A and B. If the number of samples n is greater than 100,000, then there are more than 50,000 * 100,000 (> 5e9) equations and the system is over-constrained.
To write the system in a matrix form I proposed to stack all observations into matrices:
A matrix X of size 50,000 x n with its i-th column equals to observed x_i
A matrix Y of size 50,000 x n with its i-th column equals to observed y_i
A matrix C of size 50,000 x n with its i-th column equals to observed c_i
With these matrices we can write the model as:
C = A*X + B*Y
I hope this clears things up a bit.
Thank you #Dan and #woodchips for your interest and enlightening comments.
EDIT (2):
Submitting the following code to Octave. In this example, instead of dimension 50,000 I work with only 2, and instead of n = 100,000 observations I settled for n = 100:
n = 100;
A = rand(2,2);
B = rand(2,2);
X = rand(2,n);
Y = rand(2,n);
C = A*X + B*Y + .001*randn(size(X)); % adding noise to observations
ABt = pinv( [ X' Y'] ) * C';
Checking the difference between ground truth model (A and B) and recovered ABt:
ABt - [A' ; B']
Yields
ans =
5.8457e-05 3.0483e-04
1.1023e-04 6.1842e-05
-1.2277e-04 -3.2866e-04
-3.1930e-05 -5.2149e-05
Which is close enough to zero (remember, the observations were noisy and the solution is a least-squares one).
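To finish the toy example, the recovered matrices can be read off from ABt just as in the 50,000-dimensional case (a small sketch continuing the code above):
% Recover the two 2-by-2 matrices from the stacked solution
A_rec = ABt(1:2, :)';      % estimate of A (first two rows of ABt, transposed)
B_rec = ABt(3:4, :)';      % estimate of B
disp(A_rec - A)            % both differences should be close to zero
disp(B_rec - B)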