I have a problem when calculating the discrete Fourier transform in MATLAB. I apparently get the right result, but when I plot the amplitude of the frequencies obtained, I can see values very close to zero that should be exactly zero. I use my own implementation:
function [y] = Discrete_Fourier_Transform(x)
    N = length(x);
    y = zeros(1,N);
    for k = 1:N
        for n = 1:N
            % accumulate the k-th DFT coefficient (0-based indices n-1, k-1)
            y(k) = y(k) + x(n)*exp( -1j*2*pi*(n-1)*(k-1)/N );
        end
    end
end
I know it's better to use MATLAB's fft, but I need to use my own implementation since this is for college.
The code I used to generate the square wave:
x = [ones(1,8), -ones(1,8)];         % one period of the square wave
for i = 1:63
    x = [x, ones(1,8), -ones(1,8)];  % append 63 more periods (1024 samples total)
end
MATLAB version: R2013a (8.1.0.604), 64-bit
I have tried everything I could think of, but I do not have much experience using MATLAB and I have not found relevant information on this issue in forums. I hope someone can help me.
Thanks in advance.
This is a numerical precision problem. The stray values are on the order of 1e-15 (floating-point round-off), while the DFT of your signal has values on the order of 1e+02. Most likely this won't lead to any errors in further processing. You can calculate the total squared error between your DFT and MATLAB's fft function with
y = fft(x);
yh = Discrete_Fourier_Transform(x);
sum(abs(yh - y).^2)
ans =
3.1327e-20
which is basically zero. I would therefore conclude: your DFT function works just fine.
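If the stray near-zero values bother you when plotting, one simple workaround (my own sketch; the tolerance 1e-10 is an arbitrary choice, pick one that suits your scale) is to snap the round-off noise to exact zero before plotting:

yh = Discrete_Fourier_Transform(x);
yh(abs(yh) < 1e-10) = 0;  % zero out anything below the chosen tolerance
stem(abs(yh));            % the spurious near-zero spikes are gone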
Just one small remark: You can easily vectorize the DFT.
n = 0:N-1;                          % time indices
k = 0:N-1;                          % frequency indices
y = exp(-1j*2*pi/N * n'*k) * x(:);  % DFT matrix times the signal
With n'*k you create a matrix with all combinations of n and k. You then take the exp(...) of each of those matrix elements. With x(:) you make sure x is a column vector, so you can do the matrix multiplication (...)*x, which automatically performs the sum in the DFT formula. Actually, I just noticed: this is exactly the well-known matrix form of the DFT.
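As a quick sanity check (my own sketch; it rebuilds the square wave from the question with repmat instead of the loop), all three versions agree up to round-off:

x = repmat([ones(1,8), -ones(1,8)], 1, 64);       % same 1024-sample square wave
N = length(x);
n = 0:N-1;
k = 0:N-1;
y_vec = exp(-1j*2*pi/N * n'*k) * x(:);            % vectorized DFT (column vector)
max(abs(y_vec.' - fft(x)))                        % round-off level
max(abs(y_vec.' - Discrete_Fourier_Transform(x))) % round-off level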
I am following a machine learning course on Coursera and I am doing the following exercise using Octave (MATLAB should be the same).
The exercise is related to the calculation of the cost function for a gradient descent algorithm.
In the course slide, this is the cost function that I have to implement using Octave:

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$

where the hypothesis is given by the second equation on the slide:

$$h_\theta(x) = \theta^T x$$

So J is a function of the THETA variables, represented by the THETA vector (in the second equation above).
This is the correct MATLAB/Octave implementation of the J(THETA) computation:
function J = computeCost(X, y, theta)
%COMPUTECOST Compute cost for linear regression
%   J = COMPUTECOST(X, y, theta) computes the cost of using theta as the
%   parameter for linear regression to fit the data points in X and y

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
%               You should set J to the cost.

J = (1/(2*m))*sum(((X*theta) - y).^2);

% =========================================================================

end
where:
X is a 2-column matrix of m rows, with every element of the first column set to 1:
X =
1.0000 6.1101
1.0000 5.5277
1.0000 8.5186
...... ......
...... ......
...... ......
y is a vector of m elements (the same m as X):
y =
17.59200
9.13020
13.66200
........
........
........
Finally, theta is a 2-row column vector with both values set to 0, like this:
theta = zeros(2, 1); % initialize fitting parameters
theta
theta =
0
0
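For concreteness, calling the function on just the first three rows shown above (a minimal sketch) would look like this:

X = [1 6.1101; 1 5.5277; 1 8.5186];  % m = 3 training examples
y = [17.592; 9.1302; 13.662];
theta = zeros(2, 1);
J = computeCost(X, y, theta)         % with theta = 0 this reduces to (1/(2*m))*sum(y.^2)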
OK, coming back to my working solution:
J = (1/(2*m))*sum(((X*theta) - y).^2)
specifically to this multiplication between the matrix X and the vector theta: I know it is a valid matrix multiplication, because the number of columns of X (2) equals the number of rows of theta (2).
The doubt that is driving me crazy (it is probably trivial) relates to the course slide above:
As you can see, the second equation, used to calculate the current h_theta(x) value, uses the transposed theta vector, not the plain theta vector as done in the code.
Why?
I suspect it depends only on how the theta vector was created. It was built this way:
theta = zeros(2, 1); % initialize fitting parameters
which generates a 2-row, 1-column vector instead of a classic 1-row, 2-column vector, so maybe I do not have to transpose it. But I am absolutely not sure about this assertion.
Is my intuition correct or what am I missing?
Your intuition is correct. Effectively it does not matter whether you perform the multiplication as theta.' * X.' or as X * theta, since the first generates a horizontal vector and the second a vertical vector of the hypothesis, each representing all observations. What you're expected to do next is subtract the y vector from the hypothesis vector at each observation and sum the results, so as long as y has the same orientation as your hypothesis and you subtract at each equivalent point, the scalar end result of the summation will be the same.
Often enough, you'll see the X * theta version preferred over theta.' * X.' purely for convenience, to avoid transposing over and over again just to stay consistent with the mathematical notation. This is fine, since the underlying math doesn't really change; only the order of equivalent operations does.
I agree it's confusing though, both because it makes it harder to follow the formula when the code effectively looks like it's doing something else, and also since it messes with the usual convention that a vertical vector represents 'coordinates', and a horizontal vector represents observations. In such cases, especially in languages like matlab / octave where the orientation of a vector isn't explicitly defined in the variable's type, it is doubly important to document what you expect the inputs to represent, and preferably there should have been assert statements in the code confirming the input has been passed in the correct orientation. Clearly here they felt it wasn't necessary because this code is acting under controlled conditions in a predefined exercise environment anyway, but it would have been good practice to do so from a software engineering point of view.
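To make this concrete, here is a minimal sketch (using the example numbers from the question) showing that both orientations yield the same scalar cost:

X = [1 6.1101; 1 5.5277; 1 8.5186];  % 3 observations, intercept column first
y = [17.592; 9.1302; 13.662];
theta = [1; 2];                      % arbitrary non-zero parameters
m = length(y);

h_col = X * theta;                   % vertical hypothesis, as in the exercise code
h_row = theta.' * X.';               % horizontal hypothesis, matching the slide notation

J1 = (1/(2*m)) * sum((h_col - y).^2);
J2 = (1/(2*m)) * sum((h_row - y.').^2);
% J1 and J2 are identical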
I need to solve the linear system
A x = b
which can be done efficiently by
x = A \ b
But now A is very large and I actually only need one component, say x(1). Is there a way to solve this more efficiently than to compute all components of x?
A is not sparse. Here, efficiency is actually an issue because this is done for many b.
Also, storing the inverse of A and multiplying only its first row by b is not an option, because A is badly conditioned: the \ operator employs an LDL solver in this case, and accuracy is lost when the inverse is formed explicitly.
I don't think you'd technically get a speed-up over MATLAB's highly optimized routine. However, if you understand how the system is solved, you can stop once you have the components of x you need. For instance, a traditional QR solve finishes with back substitution, while an LU solve uses both forward and back substitution. Unfortunately, back substitution starts at the last component, so x(1) is the very last value computed; the same holds for LDL, which also employs both substitutions. That doesn't preclude the possibility that there are more efficient ways of solving whatever system you have. For illustration, here are a classical Gram-Schmidt QR factorization and a back-substitution routine:
function [Q,R] = qrcgs(A)
% Classical Gram-Schmidt for an m x n matrix
[m,n] = size(A);
% Generates the Q, R matrices
Q = zeros(m,n);
R = zeros(n,n);
for k = 1:n
    % Assign the vector for normalization
    w = A(:,k);
    for j = 1:k-1
        % Gets R entries
        R(j,k) = Q(:,j)'*w;
    end
    for j = 1:k-1
        % Subtracts off orthogonal projections
        w = w - R(j,k)*Q(:,j);
    end
    % Normalize
    R(k,k) = norm(w);
    Q(:,k) = w./R(k,k);
end
end
function x = backsub(R,b)
% Backsub for upper triangular matrix.
[m,n] = size(R);
p = min(m,n);
x = zeros(n,1);
for i = p:-1:1
    % Look from bottom, assign to vector
    r = b(i);
    for j = (i+1):p
        % Subtract off the difference
        r = r - R(i,j)*x(j);
    end
    x(i) = r/R(i,i);
end
end
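Putting the two together (a quick sketch with a made-up, well-conditioned test matrix): since A = QR, solving A x = b reduces to the triangular system R x = Q'*b:

A = rand(4) + 4*eye(4);  % diagonally dominant test matrix (assumption)
b = (1:4)';
[Q, R] = qrcgs(A);
x = backsub(R, Q'*b);
norm(x - A\b)            % round-off level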
The method mldivide, generally written as \, accepts solving many systems with the same A at once:
x = A\[b1 b2 b3 b4] % where each bi is a vector with n rows
This solves the system for each bi and returns an n x 4 matrix, where each column is the solution for the corresponding bi. Calling mldivide like this should improve efficiency because the decomposition is only done once.
As in many decompositions like LU or LDL' (including the one relevant here), the matrix multiplying x in the final substitution step is upper triangular, so the first value to be solved is x(n). However, since the LDL' decomposition itself dominates the cost, the backward substitution won't be the bottleneck of the code. The decomposition can therefore be saved in order to avoid repeating the calculation for every bi. The code would look similar to this:
[LA,DA] = ldl(A);
DA = sparse(DA);
% LA = sparse(LA); %LA can also be converted to sparse matrix
% loop over bi
xi = LA'\(DA\(LA\bi));
% end loop
As you can see in the documentation of mldivide (Algorithms section), it performs some checks on the input matrices; with LA full and DA sparse, it should go directly for a triangular solver and a tridiagonal solver respectively. If LA were converted to sparse, a triangular solver would be used too, though I don't know whether the conversion to sparse would bring any improvement.
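For concreteness, here is a minimal sketch (with made-up symmetric test data) of reusing the factorization across many right-hand sides:

n = 500;
A = randn(n); A = A + A';  % symmetric indefinite test matrix (assumption)
B = randn(n, 100);         % 100 right-hand sides

[LA, DA] = ldl(A);
DA = sparse(DA);

X = zeros(n, size(B,2));
for i = 1:size(B,2)
    X(:,i) = LA' \ (DA \ (LA \ B(:,i)));
end
x1 = X(1,:);               % the first component of x for every right-hand side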
I was asked to do circular convolution between two functions by sampling them, using the function cconv. A known result of this sort of convolution is: CCONV( sin(x), sin(x) ) == -pi*cos(x)
To test the above I did:
w = linspace(0,2*pi,1000);
l = linspace(0,2*pi,1999);
stem(l, cconv(sin(w), sin(w)))
but the plot I got was absolutely not -pi*cos(x).
Can anybody please explain what is wrong with my code and how to fix it?
In the documentation of cconv it says that:
c = cconv(a,b,n) circularly convolves vectors a and b. n is the length of the resulting vector. If you omit n, it defaults to length(a)+length(b)-1. When n = length(a)+length(b)-1, the circular convolution is equivalent to the linear convolution computed with conv.
I believe the reason for your problem is that you do not specify the third input to cconv, so it falls back to the default, which is not the right one for you: both of your inputs have length 1000, so the default is n = 1999, i.e. a plain linear convolution rather than a circular one. I made an animation showing what happens for different values of n (code below).
If you compare my result for n = 200 to your plot, you will see that the amplitude of your data is 10 times larger while your linspace is 10 times longer. This means some normalization is needed, namely multiplication by the grid step: cconv computes a discrete sum, whereas the continuous convolution is an integral, which the sum approximates only after scaling by dx.
Indeed, after proper scaling and choice of n we get the right result:
res = 100; % resolution
w = linspace(0,2*pi,res);
dx = diff(w(1:2)); % grid step
stem( linspace(0,2*pi,res), dx * cconv(sin(w),sin(w),res) );
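To see how close the scaled result is to the analytic one (a sketch building on the snippet above), overlay both curves:

res = 100;
w = linspace(0,2*pi,res);
dx = diff(w(1:2));
plot(w, dx*cconv(sin(w),sin(w),res), w, -pi*cos(w), '--');
legend('scaled cconv', '-\pi cos(x)');  % the two curves should nearly coincide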
This is the code I used for the animation:
hF = figure();
subplot(1,2,1); hS(1) = stem(1,cconv(1,1,1)); title('Autoscaling');
subplot(1,2,2); hS(2) = stem(1,cconv(1,1,1)); xlim([0,7]); ylim(50*[-1,1]); title('Constant limits');
w = linspace(0,2*pi,100);
for ind1 = 1:200
    set(hS,'XData',linspace(0,2*pi,ind1));
    set(hS,'YData',cconv(sin(w),sin(w),ind1));
    suptitle("n = " + ind1);
    drawnow
    % export_fig(char("D:\BLABLA\F" + ind1 + ".png"),'-nocrop');
end
I'm trying to find two x values for each y value on a plot that is very similar to a Gaussian function. The difficulty is that I need to find the x values for several values of y even where the Gaussian is very close to zero.
I can't post an image because I'm a new user; however, think of a Gaussian function and the regions on either side of the peak where it is close to zero. That is where I need to find the x values for a given y.
What I've tried:
When the function is discrete: I have tried interp1; however, I get the error that the input is not strictly monotonic increasing, because of the many values that are close to zero.
When I fit a two-term Gaussian:
I use fzero (as fzero(function - yvalue)); however, I get a lot of NaNs. These might come from my initial 'guess' value not being close enough?
Does anyone have any other suggestions for me to try? Or how to improve what I've already attempted?
Thanks everyone
EDIT:
I've added a picture below. The data that I actually have is the blue line, while the fitted eqn is in red. The eqn should be accurate enough.
Again, I'm trying to pick out x values for a given y where y is very small (approaching 0).
I've tried splitting the function into left and right halves for the interpolation and fzero method.
Thanks for your responses anyway, I'll have a look at bisection.
Fitting a Gaussian seems to be ineffective, as its deviation (in the x-coordinate) from the real data is noticeable.
Since your data is already given as a numeric vector y, the straightforward find(y > y0) seems adequate. Here is a sample code in which the y-values are produced from a perturbed Gaussian.
x = 0:1:700;
y = 2000*exp(-((x-200)/50).^2 - sin(x/100).^2); % imitated data
plot(x,y)
y0 = 1e-2; % the y-value to look for
i = min(find(y>y0)); % first entry above y0
if i == 1
    x1 = x(i);
else
    x1 = x(i) - y(i)*(x(i)-x(i-1))/(y(i)-y(i-1)); % linear interpolation
end
i = max(find(y>y0)); % last entry above y0
if i == numel(y)
    x2 = x(i);
else
    x2 = x(i) - y(i)*(x(i)-x(i+1))/(y(i)-y(i+1)); % linear interpolation
end
fprintf('Roots: %g, %g \n', x1, x2)
Output: Roots: 18.0659, 379.306
The curve looks much like your plot.
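Since the question mentions needing this for several y-values, the same logic can be wrapped in a small helper (my own sketch; the name crossings is hypothetical, and it assumes every y0 is crossed away from the endpoints of x):

function [x1, x2] = crossings(x, y, y0list)
% For each y0 in y0list, return the left (x1) and right (x2) x-values
% where y crosses y0, using the same interpolation as above.
x1 = zeros(size(y0list));
x2 = zeros(size(y0list));
for k = 1:numel(y0list)
    y0 = y0list(k);
    i = find(y > y0, 1, 'first');
    x1(k) = x(i) - y(i)*(x(i)-x(i-1))/(y(i)-y(i-1));
    i = find(y > y0, 1, 'last');
    x2(k) = x(i) - y(i)*(x(i)-x(i+1))/(y(i)-y(i+1));
end
end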
I have 1000 5x5 matrices, stored as a 5x5x1000 array Xm. Each $(x_{ij})_m$ is a point estimate drawn from a distribution. I'd like to calculate the covariance of each $x_{ij}$, where $i = 1,\ldots,n$ and $j = 1,\ldots,n$, along the third dimension of the array.
For example, the variance of Xm is var(Xm,0,3), which gives a 5x5 matrix of variances. Can I calculate the covariance in the same way?
Attempt at answer
So far I've done this:
for m = 1:1000
    Xm_new(m,:) = reshape(Xm(:,:,m)', 25, 1);
end
cov(Xm_new)
spy(Xm_new) gives me an unusual-looking sparsity pattern.
If you look at cov (type edit cov in the command window) you might see why it doesn't support multi-dimensional arrays: it performs a transpose and a matrix multiplication of the input, xc' * xc. Neither operation supports multi-dimensional arrays, and I guess whoever wrote the function decided not to do the work to generalize it (it still might be good to contact The MathWorks and make a feature request).
In your case, if we take the basic code from cov and make a few assumptions, we can write a covariance function M-file that supports 3-D arrays:
function x = cov3d(x)
% Based on MATLAB's cov, version 5.16.4.10
[m,n,p] = size(x);
if m == 1
    x = zeros(n,n,p,class(x));
else
    x = bsxfun(@minus,x,sum(x,1)/m);  % subtract the column means
    for i = 1:p
        xi = x(:,:,i);
        x(:,:,i) = xi'*xi;
    end
    x = x/(m-1);
end
Note that this simple code assumes that x is a series of 2-D matrices stacked along the third dimension, and that the normalization flag is 0, the default in cov. It could be expanded to multiple dimensions like var with a bit of work. In my timings, it's over 10 times faster than calling cov(x(:,:,i)) in a for loop.
Yes, I used a for loop. There may or may not be faster ways to do this, but in this case for loops are going to be faster than most schemes, especially when the size of your array is not known a priori.
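A quick usage sketch (with made-up data; note the slices must be square for this version, since the result is written back into x):

x = randn(5, 5, 1000);           % 1000 slices of 5 observations x 5 variables
C = cov3d(x);                    % 5x5x1000: one covariance matrix per slice
norm(C(:,:,7) - cov(x(:,:,7)))   % zero up to round-off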
The answer below also works for a rectangular slice xi = x(:,:,i):
function xy = cov3d(x)
[m,n,p] = size(x);
if m == 1
    xy = zeros(n,n,p,class(x));
else
    xc = bsxfun(@minus,x,sum(x,1)/m);  % subtract the column means
    for i = 1:p
        xci = xc(:,:,i);
        xy(:,:,i) = xci'*xci;
    end
    xy = xy/(m-1);
end
My answer is very similar to horchler's; however, horchler's code does not work with rectangular slices xi, because it writes the n-by-n product xi'*xi back into the m-by-n slice x(:,:,i).
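A quick usage sketch (with made-up data) where the slices are rectangular, checking one slice against the built-in cov:

x = randn(1000, 25, 3);          % 3 stacks of 1000 observations x 25 variables
C = cov3d(x);                    % 25x25x3: one covariance matrix per slice
norm(C(:,:,2) - cov(x(:,:,2)))   % zero up to round-off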