Some MATLAB code I'm trying to simplify goes to the effort of computing an FFT, only to take the absolute value and then the mean:
> vector = [0 -1 -2 -1 0 +1 +2 +1];
> mean(abs(fft(vector)))
ans = 2
All these coefficients are built up, then chopped back down to a single value. The platform I'm working on is really limited, and if I can get by without using FFT, all the better. Is there a way to approximate these operations without FFT?
Can assume vector will be at most 64 values in length.
I think this is just a very inefficient way of calculating something close to the RMS value of the signal; see Parseval's theorem, which equates sum(abs(fft(v)).^2) with length(v)*sum(abs(v).^2). If the RMS is what was intended, you can probably just simplify the expression to:
sqrt(mean(vector.*vector))
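Note, though, that Parseval's theorem involves the squared magnitudes, so mean(abs(fft(v))) is not exactly the RMS; it's worth checking both on the example vector before swapping one for the other:
v = [0 -1 -2 -1 0 +1 +2 +1];
sum(abs(fft(v)).^2)     % 96
length(v) * sum(v.^2)   % 96, so Parseval's theorem checks out
mean(abs(fft(v)))       % 2, the original quantity
sqrt(mean(v.^2))        % 1.2247, the RMS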
If your vector is real and has zero mean, as in your example, you can take advantage of the fact that the two halves of the DFT are complex conjugates of each other (basic signal processing) and save half of the abs computations (strictly speaking, the DC and Nyquist bins have no conjugate partner, but both are zero for this example). Also, use sum, which is faster than mean:
fft_vector = fft(vector);
len = length(fft_vector)/2;
sum(abs(fft_vector(1:len)))/len
I'm trying to code a MATLAB program, and I have arrived at a point where I need to do the following. I have this equation:
Cm = (-1/c^2) * integral from x_le to x_te of cl(x)*(x - Xcp) dx
I must find the value of the constant Xcp (greater than zero) that makes this integral equal to zero.
In order to do so, I have coded a loop in which the value of Xcp advances in small increments on each iteration; the integral is evaluated and checked against zero, and when it reaches zero the loop finishes and that value of Xcp is stored.
However, I don't think this is an efficient way to do the task. The running time gets very long, because the loop has many iterations and has to perform the integral and substitute the integration limits every time.
Is there a smarter way to do this in MATLAB to get more efficient code?
P.S.: I have used conv() to multiply the two factors, since cl(x) and (x - Xcp) are both polynomials.
EDIT: Here is the piece of code.
Xcp = 0.001;
found = false;
while (Xcp <= x_te && ~found)          % Xcp is upper bounded by x_te
    p = [1 -Xcp];                      % polynomial (x - Xcp), rebuilt each iteration
    int_cl_p = polyint(conv(cl, p));   % antiderivative of cl(x)*(x - Xcp)
    Cm_cp = (-1/c^2)*diff(polyval(int_cl_p, [x_le, x_te]));
    if (abs(Cm_cp) < 1e-9)             % an exact == 0 test almost never triggers in floating point
        found = true;
    else
        Xcp = Xcp + 0.001;
    end
end
This is the code I used for this section. Another problem is that I have to do it for different cases (different cl functions), which makes the code slower still.
As far as I understand, you need to solve the equation above for xCP.
I suggest using the symbolic solver for this. It is not the most efficient approach for large polynomials, but for a polynomial of degree 20 it takes less than one second. I do not claim this solution is the fastest, but it provides a generic answer to the problem: if your polynomial does not change on every iteration, you can reuse the generic solution many times and avoid recalculating the integral.
So, a generic symbolic solution in terms of xLE and xTE is obtained using this:
syms xLE xTE c x xCP
a = 1:20;
%//arbitrary polynomial of degree 20
cl = sum(x.^a.*randi([-100,100],1,20));
tic
eqn = -1/c^2 * int(cl * (x-xCP), x, xLE, xTE) == 0;
xCP = solve(eqn,xCP);
pretty(xCP)
toc
Elapsed time is 0.550371 seconds.
You can further use matlabFunction for finding the numerical solutions:
xCP_numerical = matlabFunction(xCP);
%// we then just plug xLE = 10 and xTE = 20 values into function
answer = xCP_numerical(10,20)
answer =
19.8038
A slight modification of the code lets you use it for generic coefficients.
Hope that helps
Since the factor -1/c^2 multiplies an integral that must equal zero, you can drop it, and the condition ∫ cl(x)*(x - X_CP) dx = 0 rearranges to
X_CP = [∫ from x_le to x_te of x*cl(x) dx] / [∫ from x_le to x_te of cl(x) dx]
which you can integrate however you fancy. Since cl is a polynomial of order N, if it's defined in MATLAB using the usual notation for polyval, with coefficients stored in a vector a such that
cl(x) = a(1)*x^N + a(2)*x^(N-1) + ... + a(N+1)
then both integrals are straightforward: polyint integrates a coefficient vector, and appending a zero coefficient multiplies the polynomial by x. MATLAB code might look something like this:
int_cl_p = polyint(cl);        % antiderivative of cl(x)
int_cl_x_p = polyint([cl 0]);  % antiderivative of x*cl(x); [cl 0] is x*cl(x)
X_CP = diff(polyval(int_cl_x_p,[x_le,x_te]))/diff(polyval(int_cl_p,[x_le,x_te]));
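A quick sanity check with a made-up cl and made-up limits (both purely illustrative):
cl = [1 1];            % hypothetical cl(x) = x + 1
x_le = 0; x_te = 1;    % hypothetical integration limits
int_cl_p = polyint(cl);
int_cl_x_p = polyint([cl 0]);
X_CP = diff(polyval(int_cl_x_p,[x_le,x_te]))/diff(polyval(int_cl_p,[x_le,x_te]))
% X_CP = 0.5556, and the integral of cl(x)*(x - X_CP) over [x_le, x_te] is then ~0:
residual = diff(polyval(polyint(conv(cl,[1 -X_CP])),[x_le,x_te]))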
Has anyone tried plotting a sine function for large values in MATLAB?
For example:
x = 0:1000:100000;
plot(x,sin(2*pi*x))
I was just wondering why the amplitude appears to change for this periodic function. Since sin is periodic with period 2*pi, I expect sin(2*pi*x) to be exactly zero for every integer x. Why is it not?
Does anyone know? Is there a way to get it right? And is this a known bug?
That's actually not the amplitude changing; it is due to the numerical imprecision of floating-point arithmetic. Bear in mind that you are specifying an integer sequence from 0 to 100000 in steps of 1000. If you recall from trigonometry, sin(n*x*pi) = 0 when x and n are integers, so theoretically you should obtain an output of all zeros. In your case, n = 2, and x is a number from 0 to 100000 that is a multiple of 1000.
However, look at the scale of the plot that the code in your post actually produces: it's on the order of 10^-11. Do you know how small that is? As further evidence, here are the max and min values of that sequence:
>> min(sin(2*pi*x))
ans =
-7.8397e-11
>> max(sin(2*pi*x))
ans =
2.9190e-11
The values are so small that they might as well be zero. What you are visualizing in the graph is purely numerical imprecision. As I mentioned before, sin(n*x*pi) = 0 when n and x are integers, under the assumption that we have all of the decimal places of pi available. However, because we only have 64 bits to represent pi numerically, you will certainly not get exactly zero. In addition, be advised that sin is very likely computed via numerical approximation algorithms (a Taylor/Maclaurin series, for example), which can also contribute to the result not being exactly 0.
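A way to see the size of the effect: the argument 2*pi*x gets rounded to the nearest double, adjacent doubles near 2*pi*x are spaced eps(2*pi*x) apart, and that spacing grows with x. The residuals you see are of exactly that order:
x = [10 100000];
abs(sin(2*pi*x))   % roughly [2.4e-15, 1e-11]: the nonzero residuals
eps(2*pi*x)        % roughly [7.1e-15, 1.2e-10]: double spacing at the argument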
There are, of course, workarounds, such as using the Symbolic Math Toolbox (see yoh.lej's answer). In that case you will get zero, but I won't focus on that here. Your post questions the accuracy of the sin function in MATLAB, which works on numeric inputs. Theoretically, since your input to sin is an integer sequence, every value of x should give sin(n*x*pi) = 0.
By the way, this article is good reading; it covers what every programmer needs to know about floating-point arithmetic: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html. A simpler overview can be found here: http://floating-point-gui.de/
Because what is the exact value of pi?
This apparent error is due to the limits of floating-point accuracy. If you really need/want to get around that, you can do symbolic computation with MATLAB. See the difference between:
>> sin(2*pi*10)
ans =
-2.4493e-15
and
>> sin(sym(2*pi*10))
ans =
0
I would like to convolve a time series containing two spikes (call it spike) with an exponential kernel (k1) in MATLAB, and call the convolved response calcium1. I would then like to recover the original spike data (reconSpike1) by deconvolution with the kernel. I am using the following code.
k1 = zeros(1,5000);
k1(1:1000) = (1.1.^((1:1000)/100)-(1.1^0.01))/((1.1^10)-1.1^0.01);  % rising phase of the kernel
k1(1001:5000) = exp(-((1001:5000)-1001)/1000);                      % exponential decay
k1(1) = k1(2);
spike = zeros(100000,1);
spike(1000) = 1;    % first spike
spike(1100) = 1;    % second spike
calcium1 = conv(k1, spike);
reconSpike1 = deconv(calcium1, k1);
The problem is that at the end of reconSpike1 I get a chunk of very large, high-amplitude oscillations that were not in the original data. Does anyone know why, and how to fix it?
Thanks!
It works for me if you keep the spike vector the same length as the k1 vector. i.e.:
k1=zeros(1,5000);
k1(1:1000)=(1.1.^((1:1000)/100)-(1.1^0.01))/((1.1^10)-1.1^0.01);
k1(1001:5000)=exp(-((1001:5000)-1001)/1000);
k1(1)=k1(2);
spike = zeros(5000, 1);
spike(1000)=1;
spike(1100)=1;
calcium1=conv(k1, spike);
reconSpike1=deconv(calcium1, k1);
Any reason you made them different?
You are running into either a problem with MATLAB's deconvolution algorithm, or floating point precision problems (or maybe both). I suspect it's floating point precision due to all the divisions and subtractions that take place during the deconvolution, but it might be worth contacting MathWorks directly to ask what they think.
Per the MATLAB documentation, if [q, r] = deconv(v, u), then v = conv(u, q) + r must also hold (i.e., the output of deconv should always satisfy this identity). In your case it is grossly violated. Put the following at the end of your script:
[reconSpike1, res] = deconv(calcium1, k1);   % res is the remainder (renamed to avoid shadowing rem)
max(conv(k1, reconSpike1) + res - calcium1)
I get 6.75e227, which is definitely not zero ;-) Next, try changing the length of spike to 6000; you will get a small error (~1e-15). Gradually increase the length of spike and the error will get larger and larger. Note that if you put only one non-zero element into spike, this behavior doesn't happen: the error is always zero. That makes sense, since then all MATLAB has to do is divide everything by a single number.
Here's a simple demonstration using random vectors:
v = random('uniform', 1,2,100,1);
u = random('uniform', 1,2,100,1);
[q r] = deconv(v,u);
fprintf('maximum error for length(v) = 100 is %f\n', max(conv(u, q) + r - v))
v = random('uniform', 1,2,1000,1);
[q r] = deconv(v,u);
fprintf('maximum error for length(v) = 1000 is %f\n', max(conv(u, q) + r - v))
The output is:
maximum error for length(v) = 100 is 0.000000
maximum error for length(v) = 1000 is 14.910770
I don't know what you are really trying to accomplish, so it's hard to give further advice. But I'll just point out that if you have a problem where pulses are piling up and you want to extract information about each pulse, this can be a tricky problem. I know some people who work on things like this, so if you want some references let me know and I will ask them.
You should never expect that a deconvolution can simply undo a convolution. This is because the deconvolution is an ill-posed problem.
The problem comes from the fact that convolution is an integral operator: in the continuous case it is the integral ∫ f(τ) g(t − τ) dτ. The inverse of integration (the de-convolution) is, in effect, a differentiation, and differentiation amplifies noise in the input. Thus, if the convolved signal carries even slight errors (floating-point inaccuracies may already be enough), you end up with a totally different outcome after the inversion.
There are ways to mitigate this amplification, but they have to be chosen on a per-application basis.
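One standard mitigation is to regularize the division in the frequency domain so that near-zero kernel frequencies cannot blow up. A minimal sketch applied to the vectors above; lambda is a hypothetical tuning parameter, not a value anyone here has validated:
N = length(calcium1);
K = fft(k1(:), N);                         % kernel spectrum, zero-padded to N
C = fft(calcium1(:));                      % data spectrum
lambda = 1e-3;                             % regularization strength (assumed; must be tuned)
S = C .* conj(K) ./ (abs(K).^2 + lambda);  % damped division instead of C./K
reconSpike_reg = real(ifft(S));            % regularized estimate of the spike train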
When taking fft(signal, nfft) of a signal, how does nfft change the outcome, and why? Can I use a fixed value for nfft, say 2^18, or do I need to use 2^nextpow2(2*length(signal)-1)?
I am computing the power spectral density (PSD) of two signals by taking the FFT of the autocorrelation, and I want to compare the results. Since the signals are of different lengths, I am worried that if I don't fix nfft, the comparison will be really hard!
There is no inherent reason to use a power-of-two (it just might make the processing more efficient in some circumstances).
However, to make the FFTs of two different signals "commensurate", you will indeed need to zero-pad one or other (or both) signals to the same lengths before taking their FFTs.
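For instance (a sketch, where x1 and x2 stand for your two autocorrelation sequences and Fs for the sample rate):
nfft = max(length(x1), length(x2));   % or any fixed length >= both, e.g. 2^18
X1 = fft(x1, nfft);                   % fft zero-pads x1 up to nfft samples
X2 = fft(x2, nfft);                   % same for x2, so the bins line up
f = (0:nfft-1)*Fs/nfft;               % shared frequency axis for both spectra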
However, I feel obliged to say: if you need to ask this, then you're probably not yet at a point on the DSP learning curve where you're going to be able to do anything useful with the results. You should get yourself a decent book on DSP theory.
Most modern FFT implementations (including MATLAB's, which is based on FFTW) now rarely require padding a signal's time series to a length equal to a power of two. However, nearly all implementations offer better, and sometimes much, much better, performance for FFTs of data vectors with power-of-2 lengths. For MATLAB specifically, padding to a power of 2 or to a length with many small prime factors will give you the best performance (N = 1000 = 2^3 * 5^3 would be excellent; N = 997, a prime, would be a terrible choice).
Zero-padding will not increase the frequency resolution of your PSD, but it does reduce the bin size in the frequency domain. If you append NZeros zeros to a signal vector of length N, the FFT output grows to length N + NZeros (of which the one-sided PSD keeps (N + NZeros)/2 + 1 points). Each frequency bin then has a width of:
Bin width (Hz) = F_s / ( N + NZeros )
Where F_s is the signal sample frequency.
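To illustrate with made-up numbers (the sample rate and lengths here are hypothetical):
Fs = 1000;           % assumed sample rate, Hz
x = randn(1, 900);   % hypothetical signal, N = 900, bin width Fs/900 ~ 1.11 Hz
X1 = fft(x);
X2 = fft(x, 2048);   % zero-padded: bin width Fs/2048 ~ 0.49 Hz
% finer bin spacing in X2, but no new information about the underlying spectrum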
If you find that you need to separate or identify two closely spaced peaks in the frequency domain, you need to increase your sample time. You'll quickly discover that zero-padding buys you nothing to that end, and intuitively that's what we'd expect: how can we get more information out of our power spectrum without putting more information (a longer time series) in?
I have been given a very large matrix (I cannot change its values), and I need to calculate the inverse of this (covariance) matrix.
Sometimes I get the error saying
Matrix is close to singular or badly scaled.
Results may be inaccurate
In these situations I see that det returns 0.
Before calculating the inverse (of the covariance matrix), I want to check the value of the det and do something like this:
covarianceFea = cov(fea_class);
covdet = det(covarianceFea);
if (covdet == 0)
    covdet = covdet + .00001;
    % calculate the covariance using this new det
end
Is there any way to use the new det and then use this to calculate the inverse of the covariance matrix?
Sigh. Using the determinant to determine singularity is a ridiculous thing to do, utterly so, and especially so for a large matrix. Sorry, but it is, even though some books tell you to do it (maybe even your instructor did). Why?
Analytical singularity is one thing, but how about the numerical determination of singularity? Unless you are using a symbolic tool, MATLAB uses floating-point arithmetic: it stores numbers as double-precision floating-point values, and those numbers cannot be smaller in magnitude than
>> realmin
ans =
2.2251e-308
(Actually, MATLAB goes a bit lower than that with denormalized numbers, which go down to approximately 1e-323.) See what happens when I try to store a number smaller than that: MATLAB thinks it is zero.
>> A = 1e-323
A =
9.8813e-324
>> A = 1e-324
A =
0
What happens with a large matrix? For example, is this matrix singular:
M = eye(1000);
Since M is an identity matrix, it is fairly clearly non-singular. In fact, det does suggest that it is non-singular.
>> det(M)
ans =
1
But multiply it by some constant. Does that make it singular? NO! Of course not. But try it anyway.
>> det(M*0.1)
ans =
0
Hmm, that is odd: MATLAB tells me the determinant is zero, but we know that the determinant is 1e-1000. Oh, yes. 1e-1000 is smaller, by a considerable amount, than the smallest number that I just showed you MATLAB can store as a double. So the determinant underflows, even though it is obviously non-zero. Is the matrix singular? Of course not. But does det fail here? Of course it does, and this is completely expected.
Instead, use a good tool for determining singularity, such as cond or rank. For example, can we fool rank?
>> rank(M)
ans =
1000
>> rank(M*.1)
ans =
1000
See that rank knows this is a full rank matrix, regardless of whether we scale it or not. The same is true of cond, computing the condition number of M.
>> cond(M)
ans =
1
>> cond(M*.1)
ans =
1
Welcome to the world of floating-point arithmetic. And, by the way, forget about det as a tool for almost any computation done in floating-point arithmetic: it is almost always a poor choice.
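Applied to the covariance matrix in the question, that advice might look like the following sketch (the 1e-12 threshold is an assumption you would tune for your data):
covarianceFea = cov(fea_class);
if rcond(covarianceFea) < 1e-12    % reciprocal condition estimate, not det
    invCov = pinv(covarianceFea);  % nearly singular: fall back to the pseudo-inverse
else
    invCov = inv(covarianceFea);
end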
Woodchips has given you a very good explanation for why you shouldn't use the determinant. This seems to be a common misconception and your question is very related to another question on inverting matrices: Is there a fast way to invert a matrix in Matlab?, where the OP decided that because the determinant of his matrix was 1, it was definitely invertible! Here's a snippet from my answer
Rather than det(A) = 1, it is the condition number of your matrix that dictates how accurate or stable the inverse will be. Note that det(A) = ∏ λ_i over i = 1:n, where the λ_i are the eigenvalues. So just setting λ_1 = M, λ_n = 1/M, and λ_i = 1 for all other i will give you det(A) = 1. However, as M → ∞, cond(A) = M^2 → ∞ and λ_n → 0, meaning your matrix is approaching singularity and there will be large numerical errors in computing the inverse.
You can test this in MATLAB with the following simple example:
A = eye(10);
A(1,1) = 1e15;   %# lambda_1 = M
A(2,2) = 1e-15;  %# lambda_n = 1/M, so det(A) = 1
%# calculate determinant
det(A)
ans =
1
%# calculate condition number
cond(A)
ans =
1.0000e+30
In such a scenario, calculating an inverse is not a very good idea. If you just have to do it, I would suggest using this to increase display precision:
format long;
Another suggestion would be to take an SVD of the matrix and tinker with the singular values there:
A = U*S*V'
inv(A) = V*inv(S)*U'
S is a diagonal matrix of singular values, and here you will see one of the diagonal entries close to 0. Try playing around with that number (for example, truncating the smallest singular values) if you want some sort of approximation.
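A sketch of that idea, which is essentially what pinv does (the tolerance mirrors pinv's default and is an assumption you can adjust):
[U, S, V] = svd(A);
s = diag(S);                       % singular values, largest first
tol = max(size(A)) * eps(max(s));  % drop anything at or below this threshold
r = sum(s > tol);                  % numerical rank
invA_approx = V(:,1:r) * diag(1./s(1:r)) * U(:,1:r)';   % truncated-SVD inverse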