I have a simple function below (I omitted the allocations, etc. for brevity) that I have been trying to plot against its x value for specific values of N and T, but I keep getting a dimensions error. I think the problem is that when I try to plot this, I define an array for x and then plot Psum(N', x, T') for certain N' and T' against these x values; however, MATLAB doesn't seem to like this. Can someone give me some direction please?
function U = Psum(N, X, T)
for m = 1:N
A(1,m) = (1/(m*pi))*sin(m*pi*X)*T*exp(-(m^2)*(pi^2)*T);
% array terms of partial sum
end
M = -sum(A); % evaluate Nth partial sum
U = T*(1-X) + M; % output U(X,T) = T(1-X) + V(X,T)
end
I'm getting a similar error when I try to plot the following, so I think there must be something wrong with my general approach:
syms x;
f = @(x)((x/(100*pi))*(exp(-(100^2)*(pi^2)*x)));
x = 0:0.1:10000;
plot(x,f(x),'r')
title('PartialSum convergence');
xlabel('T');
ylabel('a_n');
The error I get here reads:
Error using *
Inner matrix dimensions must agree.
Here's the analysis of why you're getting a dimension mismatch error. From this line:
A(1,m) = (1/(m*pi))*sin(m*pi*X)*T*exp(-(m^2)*(pi^2)*T)
The element A(1, m) is supposed to be a scalar value in a two-dimensional matrix. Now let's see what are the dimensions of each of the multiplicands:
(1/(m*pi)) is a scalar (that is, a 1×1 matrix).
sin(m*pi*X) has the same dimensions as X. Let's assume its dimensions are q×n.
exp(-(m^2)*(pi^2)*T) has the same dimensions as T, and is multiplied by T.
Therefore T must be a square matrix, so let's assume its dimensions are p×p.
What we get is a q×n matrix multiplied by a square p×p matrix, and the result must be a scalar (that is, 1×1 matrix). This forces q=1 and n=p.
Now let's look at this line:
U = T*(1-X) + M
We are forced to conclude that p=1, otherwise T cannot be multiplied by X from the right.
This means that your code forces T and X to be scalar! No wonder you're getting an error :)
The remedy is simple: revise the computation in Psum so that it can produce correct results for both a scalar X and a vector X. A possible fix would be adding another loop to iterate over all values of X:
function U = Psum(N, X, T)
U = zeros(size(X));
for k = 1:numel(X) % iterate over all values of X
for m = 1:N
A(1,m) = (1/(m*pi))*sin(m*pi*X(k))*T*exp(-(m^2)*(pi^2)*T);
% array terms of partial sum
end
M = -sum(A); % evaluate Nth partial sum
U(k) = T*(1-X(k)) + M; % output U(X,T) = T(1-X) + V(X,T)
end
end
The output of this function has the same dimensions as X.
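For example (a minimal sketch; N = 50 and T = 0.1 are just illustrative values, not taken from the question), the revised Psum can now be plotted directly against a vector of x values:
x = 0:0.01:1; % spatial grid
U = Psum(50, x, 0.1); % U has the same size as x
plot(x, U, 'r')
xlabel('x'); ylabel('U(x,T)')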
By the way, did you verify that Psum produces the correct result for scalar inputs?
I don't fully understand what you are trying to accomplish, but here is an observation: if your input X is a vector, the line
A(1,m) = (1/(m*pi))*sin(m*pi*X)*T*exp(-(m^2)*(pi^2)*T);
cannot be computed correctly, because the right-hand side of the assignment gives you a vector, while the left-hand side A(1,m) is a single element, not a vector, so you have a dimension mismatch.
Hope this helps!
Knowing that X is a mx3 matrix and theta is a 3x1 matrix, I calculated the cost function of logistic regression as follows:
h = sigmoid(theta'*X');
J = ((-y)*log(h)-(1-y)*log(1-h))/m;
grad(1) = (h'-y)'*X(:,1);
grad(2) = (h'-y)'*X(:,2);
grad(3) = (h'-y)'*X(:,3);
The output is shown in the attached picture; that's clearly not the correct result.
When I do
h = sigmoid(X*theta);
J = ((-y)'*log(h)-(1-y)'*log(1-h))/m;
grad = (X'*(h - y))/m;
I get the right result.
To me, these two pieces of code are the same, and yes, I checked the matrix sizes in the first version.
Could somebody help me understand why one gives one output and the other a different output? Somehow, the first version prints lots of cost values at the test theta...
This is because you're not paying attention to the dimensionality of your inputs and outputs (which, in turn, is because your code is not properly commented/structured). Assuming y has the same orientation as X in terms of observations, then:
In the first case you have:
h = sigmoid(theta'*X'); # h is a 1xm horizontal vector
J = ((-y)*log(h)-(1-y)*log(1-h))/m; # J is an mxm matrix
In the second case you have:
h = sigmoid(X*theta); # h is an mx1 vector
J = ((-y)'*log(h)-(1-y)'*log(1-h))/m; # J is a 1x1 scalar
This is also the reason you get multiple printouts of that "Cost at test theta" line. My guess is that you're calling sum somewhere down the line to sum over the m observations, but because J was an mxm matrix instead of a vector, you ended up with a vector inside an fprintf statement, which has the effect of printing that statement as many times as there are elements in the vector. Is m = 12 by any chance?
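A minimal sketch that makes the size difference visible (the data and the anonymous sigmoid here are dummy stand-ins, purely for illustration):
m = 5; % 5 observations, 3 features
X = [ones(m,1) rand(m,2)]; % m x 3
y = double(rand(m,1) > 0.5); % m x 1
theta = zeros(3,1); % 3 x 1
sigmoid = @(z) 1./(1 + exp(-z)); % stand-in for the course's sigmoid function
h1 = sigmoid(theta'*X'); % 1 x m
J1 = ((-y)*log(h1)-(1-y)*log(1-h1))/m; % m x m matrix
h2 = sigmoid(X*theta); % m x 1
J2 = ((-y)'*log(h2)-(1-y)'*log(1-h2))/m; % 1 x 1 scalar
size(J1) % 5 5
size(J2) % 1 1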
The following is a function that takes two equal-sized vectors X and Y and is supposed to return a vector containing single correlation coefficients for image correspondence. The function is supposed to work similarly to the built-in corr(X,Y) function in MATLAB when given two equal-sized vectors. Right now my code is producing a vector containing multiple two-number vectors instead of a vector containing single numbers. How do I fix this?
function result = myCorr(X, Y)
meanX = mean(X);
meanY = mean(Y);
stdX = std(X);
stdY = std(Y);
for i = 1:1:length(X),
X(i) = (X(i) - meanX)/stdX;
Y(i) = (Y(i) - meanY)/stdY;
mult = X(i) * Y(i);
end
result = sum(mult)/(length(X)-1);
end
Edit: To clarify, I want myCorr(X,Y) above to produce the same output as MATLAB's corr(X,Y) when given equal-sized vectors of image intensity values.
Edit 2: Now the format of the output is correct; however, the values are off by a lot.
I recommend you use r = corrcoef(X,Y); it will give you the normalized r value you are looking for in a 2x2 matrix, and you can just return the r(2,1) entry as your answer. Doing this is equivalent to
r=(X-mean(X))*(Y-mean(Y))'/(sqrt(sum((X-mean(X)).^2))*sqrt(sum((Y-mean(Y)).^2)))
However, if you really want to do what you mentioned in the question you can also do
r=(X)*(Y)'/(sqrt(sum((X-mean(X)).^2))*sqrt(sum((Y-mean(Y)).^2)))
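A quick usage sketch of the corrcoef route (the intensity vectors here are random dummy data, purely for illustration):
X = rand(1,20); Y = rand(1,20); % dummy row vectors of intensities
R = corrcoef(X, Y); % 2x2 matrix of correlation coefficients
r = R(2,1) % the single coefficient you want
% should match the normalized formula above
r2 = (X-mean(X))*(Y-mean(Y))'/(sqrt(sum((X-mean(X)).^2))*sqrt(sum((Y-mean(Y)).^2)))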
I would like to numerically integrate a vector which represents a function f(x) over the range of x specified by bounds x0 and x1 in Matlab. I would like to check that the output of the integration is correct and that it converges.
There are the quad and quadl functions that serve well in identifying the required error tolerance, but they need the input argument to be a function and not the resulting vector of the function. There is also the trapz function where we can enter the two vectors x and f(x), but then it computes the integral of f(x) with respect to x depending on the spacing used by vector x. However, there is no given way using trapz to adjust the tolerance as in quad and quadl and make sure the answer is converging.
The main problem why I can't use quad and quadl functions is that f(x) is the following equation:
f(x) = sum(exp(-1/2 *(x-y))), the summation is over y, where y is a vector of length n and x is an element that is given each time to the function f(x). Therefore, all elements in vector y are subtracted from element x and then the summation over y is calculated to give us the value f(x). This is done for m values of x, where m is not equal to n.
When I use quadl as explained in the MATLAB manual, with f(x) defined in a separate .m function file, I call Q = quadl(@f,x0,x1,tolerance,X,Y); from the main file, where X is a vector of length m and Y is a vector of length n. MATLAB gives the error
??? Error using ==> minus
Matrix dimensions must agree.
at the line in the .m function file where I define f(x) = sum(exp(-1/2 *(x-y))).
I assume the problem is that Matlab treats x and y as vectors that should be of the same length when they are subtracted from each other, whereas what's needed is to subtract the vector Y each time from a single element from the vector X.
Would you please recommend a way to solve this problem and successfully numerically integrate f(x) versus x with a method to control the tolerance?
From the documentation on quad, it says:
The function y = fun(x) should accept a vector argument x and return a vector result y, the integrand evaluated at each element of x.
So every time we call the function, we need to evaluate the integrand at each given x.
Also, to parameterize the function call with the constant vector Y, I recommend an anonymous function call. There's a reasonable demo here. Here's how I implemented your problem in Matlab:
function Q = test_num_int(x0,x1,Y)
Q = quad(@(x) myFun(x,Y),x0,x1);
end
function fx = myFun(x,Y)
% Evaluate the integrand at every element of x: f(x) = sum over Y of exp(-1/2*(x-Y))
fy = zeros(size(Y));
fx = zeros(size(x));
for jj=1:length(fx)
for ii=1:length(Y)
fy(ii) = exp(-1/2 *(x(jj)-Y(ii)));
end
fx(jj) = sum(fy); % sum over all elements of Y for this x(jj)
end
end
Then I called the function and got the following output:
Y = 0:0.1:1;
x0 = 0;
x1 = 1;
Q = test_num_int(x0,x1,Y)
Q =
11.2544
The inputs for the lower and upper bound and the constant array are obviously just dummy values, but the integral converges very quickly, almost immediately. Hope this helps!
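If you also want explicit control over the tolerance, as asked in the question, quad accepts it as a fourth argument (a minimal sketch; the 1e-8 value is just an example):
Q = quad(@(x) myFun(x,Y), x0, x1, 1e-8); % absolute error tolerance of 1e-8
In newer releases, integral(...,'AbsTol',...,'RelTol',...) plays the same role.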
I believe the following would also work:
y = randn(10,1);
func = @(x) sum(exp(-1/2 *(x-y)));
integral(func,0,1,'ArrayValued',true)
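As a quick cross-check against the previous answer (reusing its Y = 0:0.1:1 instead of random data), this reproduces the same value:
Y = 0:0.1:1;
func = @(x) sum(exp(-1/2 *(x - Y))); % scalar x minus the vector Y
integral(func,0,1,'ArrayValued',true) % ~11.2544, matching quad above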
I have a 2x2 matrix, each element of which is a 1x5 vector. something like this:
x = 1:5;
A = [ x x.^2; x.^2 x];
Now I want to find the determinant, but this happens
B = det(A);
Error using det
Matrix must be square.
Now I can see why this happens, MATLAB sees A as a 2x10 matrix of doubles. I want to be able to treat x as an element, not a vector. What I'd like is det(A) = x^2 - x^4, then get B = det(A) as a 1x5 vector.
How do I achieve this?
While Matlab has symbolic facilities, they aren't great. Instead, you really want to vectorize your operation. This can be done in a loop, or you can use ARRAYFUN for the job. It sounds like ARRAYFUN would probably be easier for your problem.
The ARRAYFUN approach:
x = 1:5;
detFunc = @(x) det([ x x^2 ; x^2 x ]);
xDet = arrayfun(detFunc, x)
Which produces:
>> xDet = arrayfun(detFunc, x)
xDet =
0 -12 -72 -240 -600
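As a sanity check, for this particular 2x2 case the determinant works out to x^2 - x^4, so the element-wise expression gives the same vector directly:
x = 1:5;
xDet2 = x.^2 - x.^4 % 0 -12 -72 -240 -600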
For a more complex determinant, like your 4x4 case, I would create a separate M-file for the actual function (instead of an anonymous function as I did above), and pass it to ARRAYFUN using a function handle:
xDet = arrayfun(@mFileFunc, x);
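mFileFunc is just a placeholder name; such a file would contain something along these lines (shown for the 2x2 example, since the 4x4 matrix isn't given):
function d = mFileFunc(x)
% Determinant of the example matrix, evaluated for a single scalar x
d = det([x x^2; x^2 x]);
end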
Well, mathematically a determinant is only defined for a square matrix, so unless you can provide a square matrix you're not going to be able to use the determinant.
Update: Possible solution?
x = zeros(2,2,5);
x(1,1,:) = 1:5;
x(1,2,:) = 5:-1:1;
x(2,1,:) = 5:-1:1;
x(2,2,:) = 1:5;
for(n=1:5)
B(n) = det(x(:,:,n));
end
Would something like that work, or are you looking to account for each vector at the same time? This method treats each 'layer' as its own 2x2 matrix, but I have a sneaking suspicion that you want to get a single value as a result.
I am working towards comparing multiple images. I have the image data as column vectors of a matrix called "images." I want to assess the similarity of images by first computing their Euclidean distance. I then want to create a matrix over which I can execute multiple random walks. Right now, my code is as follows:
% clear
% clc
% close all
%
% load tea.mat;
images = Input.X;
M = zeros(size(images, 2), size (images, 2));
for i = 1:size(images, 2)
for j = 1:size(images, 2)
normImageTemp = sqrt((sum((images(:, i) - images(:, j))./256).^2));
%Need to accurately select the value of gamma_i
gamma_i = 1/10;
M(i, j) = exp(-gamma_i.*normImageTemp);
end
end
My matrix M however, ends up having a value of 1 along its main diagonal and zeros elsewhere. I'm expecting "large" values for the first few elements of each row and "small" values for elements with column index > 4. Could someone please explain what is wrong? Any advice is appreciated.
Since you're trying to compute a Euclidean distance, it looks like you have an error in where your parentheses are placed when you compute normImageTemp. You have this:
normImageTemp = sqrt((sum((...)./256).^2));
%# ^--- Note that this parenthesis...
But you actually want to do this:
normImageTemp = sqrt(sum(((...)./256).^2));
%# ^--- ...should be here
In other words, you need to perform the element-wise squaring, then the summation, then the square root. What you are doing now is summing elements first, then squaring and taking the square root of the summation, which essentially cancel each other out (or are actually the equivalent of just taking the absolute value).
Incidentally, you can actually use the function NORM to perform this operation for you, like so:
normImageTemp = norm((images(:, i) - images(:, j))./256);
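A tiny illustration of the difference, using a dummy vector:
v = [3 -4];
sqrt(sum(v).^2) % = 1, just abs(sum(v)), not a norm
sqrt(sum(v.^2)) % = 5, the Euclidean norm
norm(v) % = 5, same thing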
The results you're getting seem reasonable. Recall the behavior of exp(-x): when x is zero, exp(-x) is 1; when x is large, exp(-x) is close to zero.
Perhaps if you make M(i,j) = normImageTemp; you'd see what you expect to see.
Consider this solution:
I = Input.X;
D = squareform( pdist(I') ); %# euclidean distance between columns of I
M = exp(-(1/10) * D); %# similarity matrix between columns of I
PDIST and SQUAREFORM are functions from the Statistics Toolbox.
Otherwise consider this equivalent vectorized code (using only built-in functions):
%# we know that: ||u-v||^2 = ||u||^2 + ||v||^2 - 2*u.v
X = sum(I.^2,1);
D = real( sqrt(bsxfun(@plus,X,X')-2*(I'*I)) );
M = exp(-(1/10) * D);
As was explained in the other answers, D is the distance matrix, while exp(-D) is the similarity matrix (which is why you get ones on the diagonal).
There is an already implemented function, pdist; if you have a matrix A, you can directly do
Sim = squareform(pdist(A))
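A short usage sketch tying this back to the question (the images are stored as columns, so the matrix is transposed before pdist, as in the previous answer; the data here is random and purely illustrative):
A = rand(256, 10); % dummy: 10 images as 256-pixel column vectors
D = squareform(pdist(A'./256)); % pairwise Euclidean distances between columns of A
Sim = exp(-(1/10) * D); % similarity matrix; ones on the diagonal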