Below is some MATLAB code that sits inside a loop which increments position every cycle. However, after the loop has run, the values in the vector F1 are all NaN. When I print A to the screen, it shows the expected values.
% 500 Hz Bands
total_energy = sum(F(1:(F_L*8)));
A = (sum(F(1:F_L))) /total_energy %% correct value printed to screen
F1(position) = (sum(F(1:F_L))) /total_energy;
Any help is appreciated
Assuming that F_L = position * interval, I suggest you use something like:
cumulative_energy_distribution = cumsum(abs(F).^2);
positions = 1:q; % q = total number of positions visited by the loop
F1 = cumulative_energy_distribution(positions * interval) ./ cumulative_energy_distribution(positions * 8 * interval);
The cumulative sum of squares, which is the energy density (as seen in Parseval's theorem), is monotonically increasing, so you don't need to worry about the energy going back down to zero.
In MATLAB, if you divide zero by zero, you get a NaN. Therefore, it's always better to add an infinitesimal number to the denominator, chosen so that its addition doesn't change the final result but avoids the division by zero. You can choose any small value as that infinitesimal number (such as 10^-10), but MATLAB already has a variable called eps.
eps(n) gives the distance from n to the next-largest number (at the same precision as n) that can be represented in MATLAB. For example, if n = 1, the next double-precision number you can get to from 1 is 1 + eps(1) = 1 + 2.2204e-16. If n = 10^10, the next number is 10^10 + 1.9073e-06. eps on its own is the same as eps(1) = 2.2204e-16. Adding eps doesn't change the output and avoids the 0/0 situation.
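Applied to the original snippet, a minimal sketch (reusing the asker's variables F, F_L, F1 and position):
total_energy = sum(F(1:(F_L*8)));
% eps in the denominator guards against 0/0 when total_energy is zero
F1(position) = (sum(F(1:F_L))) / (total_energy + eps);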
I have 2 nested loops which do the following:
Get two rows of a matrix
Check if indices meet a condition or not
If they do: calculate xcorr between the two rows and put it into new vector
Find the index of the maximum value of sub vector and replace element of LAG matrix with this value
I don't know how I can speed this code up, by vectorizing or otherwise.
b=size(data,1);
F=size(data,2);
LAG= zeros(b,b);
for i=1:b
    for j=1:b
        if j>i
            x=data(i,:);
            y=data(j,:);
            d=xcorr(x,y);
            d=d(:,F:(2*F)-1);
            [M,I] = max(d);
            LAG(i,j)=I-1;
            d=xcorr(y,x);
            d=d(:,F:(2*F)-1);
            [M,I] = max(d);
            LAG(j,i)=I-1;
        end
    end
end
First, a note on floating point precision...
You mention in a comment that your data contains the integers 0, 1, and 2. You would therefore expect a cross-correlation to give integer results. However, since the calculation is being done in double-precision, there appears to be some floating-point error introduced. This error can cause the results to be ever so slightly larger or smaller than integer values.
Since your calculations involve looking for the location of the maxima, you could get slightly different results if there are repeated maximal integer values with added precision errors. For example, let's say you expect the value 10 to be the maximum and to appear at indices 2 and 4 of a vector d. You might calculate d one way and get d(2) = 10 and d(4) = 10.00000000000001, with some added precision error; the maximum would then be located at index 4. If you use a different method to calculate d, you might get d(2) = 10 and d(4) = 9.99999999999999, with the error going in the opposite direction, causing the maximum to be located at index 2.
The solution? Round your cross-correlation data first:
d = round(xcorr(x, y));
This will eliminate the floating-point errors and give you the integer results you expect.
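As a hypothetical illustration (whether any error actually shows up depends on your MATLAB version and data):
x = [2 0 1 1];
y = [1 0 2 2];
d = xcorr(x, y);          % mathematically integer-valued for integer inputs
max(abs(d - round(d)))    % often a tiny non-zero residue, e.g. on the order of 1e-15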
Now, on to the actual solutions...
Solution 1: Non-loop option
You can pass a matrix to xcorr and it will perform the cross-correlation for every pairwise combination of columns. Using this, you can forego your loops altogether like so:
d = round(xcorr(data.'));
[~, I] = max(d(F:(2*F)-1,:), [], 1);
LAG = reshape(I-1, b, b).';
Solution 2: Improved loop option
There are limits to how large data can be for the above solution, since it produces large intermediate and output variables that can exceed the maximum available array size. In such cases, for loops may be unavoidable, but you can still improve upon the question's for-loop solution. Specifically, you can compute the cross-correlation once for a pair (x, y), then just flip the result to get the pair (y, x):
LAG = zeros(b, b); % Preallocate (the diagonal stays zero)
% Loop over rows:
for row = 1:b
    % Loop over upper matrix triangle:
    for col = (row+1):b
        % Cross-correlation for upper triangle:
        d = round(xcorr(data(row, :), data(col, :)));
        [~, I] = max(d(:, F:(2*F)-1));
        LAG(row, col) = I-1;
        % Cross-correlation for lower triangle: xcorr(y,x) is xcorr(x,y) flipped
        d = fliplr(d);
        [~, I] = max(d(:, F:(2*F)-1));
        LAG(col, row) = I-1;
    end
end
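As a quick sanity check, a hypothetical test harness (small random data of 0s, 1s and 2s, as described in the comments; xcorr requires the Signal Processing Toolbox) should show both approaches agree:
data = randi([0 2], 5, 8);   % 5 signals of length 8, values in {0,1,2}
b = size(data, 1);
F = size(data, 2);
% Solution 1 (vectorized):
d = round(xcorr(data.'));
[~, I] = max(d(F:(2*F)-1, :), [], 1);
LAG1 = reshape(I-1, b, b).';
% Solution 2 (loop), as above but filling LAG2:
LAG2 = zeros(b, b);
for row = 1:b
    for col = (row+1):b
        d = round(xcorr(data(row, :), data(col, :)));
        [~, I] = max(d(:, F:(2*F)-1));
        LAG2(row, col) = I-1;
        d = fliplr(d);
        [~, I] = max(d(:, F:(2*F)-1));
        LAG2(col, row) = I-1;
    end
end
isequal(LAG1, LAG2)   % expected: true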
I have 20 values x1,...x20. Each value is between 0 and 1, for example 0.22,0.23,0.25,...
x = rand(20,1);
x = sort(x);
Now I would like to choose one data point, but not uniformly at random. The data point with the lowest value should have the highest probability, and the other values should have a probability that depends on the difference between their function value and the lowest value.
For example, if the lowest function value is 0.22, a data point with a function value of 0.23 has a difference to the best value of 0.23 - 0.22 = 0.01 and should therefore have a probability similar to the 0.22 value. But a value of 0.3 has a difference of 0.3 - 0.22 = 0.08 and should therefore have a much smaller probability.
How can this be done?
The data point with the lowest value should have the highest probability and the other values should have a probability proportional to the difference in function value to the lowest value.
Let's take an array of 20 items and subtract the lowest number from the entire array. This leaves the smallest value (which you want to be the most probable) as 0. Now we need to define a probability function over all of the points that sums to 1.
I've done the following:
x = rand(20, 1);
x = sort(x);
xx = x - x(1);
At this point we can invert the values so that the lowest point maps to 1.
Px = 1 - xx; %For probabilities
TotalP = sum(Px);
Now we have everything we need, I think... so let's see what we can make.
P = Px/TotalP; %This will be our probability.
SanityCheck = sum(P); %Make sure that it sums up to 1.
Looks like that works, so let's make our cumulative-sum array and pick an element.
PI = cumsum(P); %Cumulative distribution of the probabilities
test = rand; %Create a test number so we can place it in the integral function
index = find(PI > test, 1); %This will return the first entry that is greater than our test value...
result = x(index); %And here's our value
I hope this is along what you were looking for. If not, please comment and I'll get back to you. :)
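As a side note, if you have the Statistics and Machine Learning Toolbox, randsample can perform the weighted draw directly (a sketch, reusing the P computed above):
index = randsample(numel(x), 1, true, P); % one weighted sample, with replacement
result = x(index);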
Assume that I have the vector shown in the figure below. By common sense, we can see that there are 2 values which suddenly depart from the trend of the vector.
How do I eliminate these sudden changes? That is, how do I automatically detect the noisy values and replace them with the average of their neighbors?
Define a threshold, compute the average of each point's neighbors, then compare the relative error between each value and that average:
threshold = 5e-2;
averages = [v(1); (v(3:end) + v(1:end-2)) / 2; v(end)];
is_outlier = (v - averages).^2 > threshold^2 * averages.^2;
Then replace the outliers:
v(is_outlier) = averages(is_outlier);
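A hypothetical example (v is a made-up column vector with spikes at positions 4 and 7):
v = [1; 1; 1; 1.1; 1; 1; 1.08; 1];
threshold = 5e-2;
averages = [v(1); (v(3:end) + v(1:end-2)) / 2; v(end)];
is_outlier = (v - averages).^2 > threshold^2 * averages.^2;
find(is_outlier)            % returns 4 and 7
v(is_outlier) = averages(is_outlier);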
I have two signals. I calculated the local peaks of each signal and saved them in two vectors for amplitude and another two for timing. I need to get the intersection between the peaks: each peak has a value and a time, so I am trying to extract the peaks which have nearly the same amplitude at nearly the same time. Any help?
My Code:
[svalue1,stime1] = findpeaks(O1);
[svalue2,stime2] = findpeaks(O2);
%note that the peaks count is different in each signal
% This is my try but it is not working
x = length(intersect(ceil(svalue1),ceil(svalue2)))/min(length(svalue1),length(svalue2));
It is my understanding that you want to determine those values in svalue1 and svalue2 that are similar to each other, with the added complication that the two vectors are unequal in length.
What you can do is compare every value in svalue1 with every value in svalue2, and if the difference between a value in svalue1 and a value in svalue2 is less than a certain amount, classify those two elements as the same.
This can be achieved with bsxfun and the @minus function, eliminating any sign changes with abs. After that, we can determine the locations where the differences are below a certain amount.
Something like this:
tol = 0.5; %// Adjust if necessary
A = abs(bsxfun(@minus, svalue1(:), svalue2(:).')) <= tol;
[row,col] = find(A);
out = [row,col];
tol is the tolerance we use to decide whether two values are close together. I chose 0.5, but adjust this for your application. out is a 2D matrix that tells you which values in svalue1 were close to which values in svalue2. Rather than giving a verbose explanation, let's just show an example of this working and explain along the way.
Let's try this on an example:
>> svalue1 = [0 0.1 1 2.2 3];
>> svalue2 = [0.1 0.2 2 3 4];
Running the above code, we get:
>> out
out =
1 1
2 1
1 2
2 2
4 3
5 4
Now this makes sense. Each row tells you which value in svalue1 is close to which value in svalue2. For example, the first row says that the first value in svalue1, or 0, is close to the first value in svalue2, or 0.1. The next row says that the second value of svalue1, or 0.1, is also close to the first value of svalue2, or 0.1.
Obviously, this operation can produce non-unique matches: a single value can appear in several rows (for example, rows [1 1] and [2 1] both match the first value of svalue2). I'm assuming this isn't a problem, so we'll leave that alone.
Now, what I didn't cover is whether the peaks also happen at nearly the same time. This can be done by performing another bsxfun operation on the time vectors stime1 and stime2, much like we did with svalue1 and svalue2, and performing a logical AND between the two matrices. Only peaks that are close in both amplitude and time survive the AND... so something like this:
tol_amplitude = 5; %// Adjust if necessary
tol_time = 0.5;
A = abs(bsxfun(@minus, svalue1(:), svalue2(:).')) <= tol_amplitude;
Atime = abs(bsxfun(@minus, stime1(:), stime2(:).')) <= tol_time;
Afinal = A & Atime;
[row,col] = find(Afinal);
out = [row,col];
You'll notice that we have two thresholds for the time and the amplitudes. Adjust both if necessary. out will contain the results like we saw earlier, but these will give you those indices that are close in both time and amplitude. If you want to see what those are, you can do something like this:
peaks = [svalue1(out(:,1)) svalue2(out(:,2))];
times = [stime1(out(:,1)) stime2(out(:,2))];
peaks and times will give you what the corresponding peaks and times were that would be considered as "close" between the two signals. The first column denotes the peaks and times for the first signal and the second column is for the peaks and times for the second signal. The difference between columns should be less than their prescribed thresholds.
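As a side note, if you are running MATLAB R2016b or newer, implicit expansion lets you write the same comparisons without bsxfun (a sketch, with the same variable names as above):
A = abs(svalue1(:) - svalue2(:).') <= tol_amplitude;
Atime = abs(stime1(:) - stime2(:).') <= tol_time;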
I've got a huge array of values, all of which are much smaller than 1, so using a round up/down function is useless. Is there any way I can use/make the 'find' function on these non-integer values?
e.g.
ind=find(x,9.5201e-007)
FWIW, all the values are in ascending sequential order in the array.
Much appreciated!
The syntax you're using isn't correct.
find(X,k)
returns the indices of the first k non-zero elements of X, which is why k must be a positive integer. What you want is
find(x==9.5201e-007);
%# ______________<-- logical index: ones where condition is true, else zeros
%# the single-argument of find returns all non-zero elements, which happens
%# at the locations of your value of interest.
Note that this needs to be an exact representation of the floating point number, otherwise it will fail. If you need tolerance, try the following example:
tol = 1e-9; %# or some other value
val = 9.5201e-007;
find(abs(x-val)<tol);
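A quick hypothetical example:
x = [1e-008 9.5201e-007 3.2e-006];  % made-up sorted data
find(abs(x - 9.5201e-007) < 1e-9)   % returns 2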
When I want to find real numbers within some tolerance, I usually round them all to that level of tolerance and then do my finding, sorting, whatever.
If x is my real numbers, I do something like
xr = 0.01 * round(x/0.01);
then xr are all multiples of .01, i.e., rounded to the nearest .01. I can then do
t = find(xr == 9.22)
and then x(t) will be every value of x between 9.215 and 9.225, i.e., everything that rounds to 9.22.
It sounds from your comments like what you want is
[b,m,n] = unique(x,'first');
then b will be a sorted version of the elements in x with no repeats, and
x = b(n);
So if there are 4 '1's in n, it means the value b(1) shows up in x 4 times, and its locations in x are at find(n==1).
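A small hypothetical example of that bookkeeping:
x = [3 1 4 1 5 1 3];            % made-up data
[b, m, n] = unique(x, 'first');
% b = [1 3 4 5]; b(1) = 1 appears three times in x
find(n == 1)                    % returns [2 4 6], the locations of the value 1 in x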