Finding peak values in an ECG signal starting from the filtered one in MATLAB

I have a table element containing an ECG signal. This has been filtered in order to find the peak values and peak locations (in terms of time). The task is to find the peak values in the original signal, starting from the peaks I found, within a range of 40 elements. This is what I tried:
for i = 1:length(peaks_ECG_IV)
    A = x_ECG_IV(i) - 20*Tc : Tc : x_ECG_IV(i) + 20*Tc;
    if x_ECG_IV(i) + 20*Tc < x_ECG_IV(length(peaks_ECG_IV)) && x_ECG_IV(i) - 20*Tc > x_ECG_IV(1)
        [y_i,x_i] = findpeaks(DatiECGPPG.ECGLeadIV, A); % DatiECGPPG.ECGLeadIV and A have different sizes
    end
end
Tc is 1/Fsamp (the sampling period).
x_ECG_IV(i) is the location (time) of the i-th peak.
peaks_ECG_IV holds the peak values of the filtered signal, which are known.
y_i and x_i are the values and the locations of the peaks I need to determine.
DatiECGPPG.ECGLeadIV is the whole array of ECG values (100000 elements).
The issue is that A and DatiECGPPG.ECGLeadIV have different sizes, so I think I need to consider only the interval of 40 values of DatiECGPPG.ECGLeadIV that corresponds to A, but I can't figure out how.
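One possible way to do this (a sketch, not tested against your data): convert each peak time to a sample index, take the 40-sample window of the original signal around it, and pick its maximum instead of calling findpeaks on the whole signal. The time-to-index conversion below assumes the time axis starts at Tc; adjust it if the axis starts at 0.
ecg = DatiECGPPG.ECGLeadIV;             % original (unfiltered) signal
nP = length(peaks_ECG_IV);
y_i = zeros(nP,1);                      % peak values in the original signal
x_i = zeros(nP,1);                      % corresponding peak times
for i = 1:nP
    idx = round(x_ECG_IV(i) / Tc);      % peak time -> sample index (assumption)
    lo = max(idx - 20, 1);              % clamp the 40-sample window to the signal
    hi = min(idx + 20, numel(ecg));
    [y_i(i), k] = max(ecg(lo:hi));      % largest value inside the window
    x_i(i) = (lo + k - 1) * Tc;         % sample index -> time
end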

Related

Why does the number of peaks of my signal stay the same when I increase N in an N-point moving average filter when the data is big?

I am using MATLAB to find the number of peaks of a signal.
I'm trying to plot the number of peaks of a signal filtered with an N-point moving average filter, for N from 2 to 30 (I also record, as the first element of the resulting array, the number of peaks when no filter is applied). My data array (imported from CSV, with double values between 0 and 1) has around 50k points. When I pass only part of the data, i.e. 100, 500 or 1000 points selected by array slicing, the number of peaks decreases as expected. However, when I pass the whole data set, or even just 2000 points, the number of peaks stays fixed at 127.
To find out why this happens, I varied the number of data points given to the filter by editing the commented lines shown below. With fewer than 1000 data points the plot was fine.
Here is the signal
https://www.dropbox.com/s/e1bkcjn5ta5q610/exampleSignal.csv?dl=0
Please import it from the 4th element to the end; it has some strange data at the beginning that I have excluded. VarName1 is the name of the imported column vector.
numberOfPeaks = zeros(30,1,'int8');
pks = findpeaks(VarName1); % VarName1(1:1000,:) (when no filter applied)
numberOfPeaks(1) = size(pks,1);
for i = 2:30
    h = 1/i*ones(1,i,'double');       % N-point moving average filter
    y = filter(h,1,VarName1);         % VarName1(1:1000,:)
    numberOfPeaks(i) = size(findpeaks(y),1);
end
plot(1:30,numberOfPeaks);
I expected the number-of-peaks curve to keep decreasing when the whole data set is given, but the plot I get flattens out at 127 instead.
I realised that the problem is the int8 type I use. It can only hold values up to 127, so any larger peak count was clipped to 127.
Switching to double solves the problem.
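In terms of the snippet above, only the preallocation line needs to change:
numberOfPeaks = zeros(30,1); % double by default, so counts above 127 are no longer clipped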

How to find the index of a specific value in a vector?

I have a 100*20 matrix called pr (power received, in my case): the 100 represents the number of users and the 20 the number of antennas, and each user receives a certain power from each of the 20 antennas (more than one user can receive power from the same antenna).
I then find the maximum power each user receives and put it in a 100*1 vector; if this maximum value is greater than -112, a counter is incremented. I need to create a new 20*1 vector, where 20 is the number of antennas, and count for each antenna the number of users that receive power greater than -112 from it.
covered_user = 0; % counter of covered users
[master_ant,id] = max(pr,[],2); % max power per user and the index of the corresponding antenna
for i = 1:100
    if (master_ant(i) >= -112)           % check the range
        covered_user = covered_user + 1; % counter increment
    end
end
I tried this:
covered_user = zeros(20,1); % one counter per antenna
[master_ant,id] = max(pr,[],2);
for i = 1:100
    if (master_ant(i) >= -112)
        covered_user(id) = covered_user(id) + 1; % note: this indexes with the whole id vector, not id(i)
    end
end
The easiest way to do this is to take a different approach: the function sum can do the whole job for you.
a = randi([-130, -60],100,20); % example matrix: 100 users by 20 antennas
covered_user = sum(a >= -112); % 1-by-20 vector: per-antenna count of users with power >= -112
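If the intent is to credit each user only to the antenna that serves it with maximum power, as the loops above suggest, a possible sketch using accumarray (the variable names other than pr are illustrative):
[master_ant, id] = max(pr, [], 2);   % best power and serving antenna for each user
covered = master_ant >= -112;        % users whose best power clears the threshold
% 20-by-1 vector: number of covered users attributed to each antenna
covered_per_antenna = accumarray(id(covered), 1, [size(pr,2) 1]);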

How to identify an optimal subsample from a data set with missing values in MATLAB

I would like to identify the largest possible contiguous subsample of a large data set. My data set consists of roughly 15,000 financial time series of up to 360 periods in length. I have imported the data into MATLAB as a 360 by 15,000 numerical matrix.
This matrix contains a lot of NaNs due to some of the financial data not being available for the entire period. In the illustration, NaN entries are shown in dark blue, and non-NaN entries appear in light blue. It is these light blue non-NaN entries which I would like to ideally combine into an optimal subsample.
I would like to find the largest possible contiguous block of data that is contained in my matrix, while ensuring that my matrix contains a sufficient number of periods.
In a first step I would like to sort my matrix from left to right in descending order by the number of non-NaN entries in each column, that is, I would like to sort by the vector obtained by entering sum(~isnan(data),1).
In a second step I would like to find the sub-array of my data matrix that is at least 72 entries along the first dimension and is otherwise as large as possible, measured by the total number of entries.
What is the best way to implement this?
A big warning (may or may not apply depending on context)
As Oleg mentioned, when an observation is missing from a financial time series, it's often missing for a reason: e.g. the entity went bankrupt, the entity was delisted, or the instrument did not trade (i.e. it was illiquid). Constructing a sample without NaNs is likely equivalent to constructing a sample where none of these events occur!
For example, if this were hedge fund return data, selecting a sample without NaNs would exclude funds that blew up and ceased trading. Excluding imploded funds would bias estimates of expected returns upwards and estimates of variance or covariance downwards.
Picking a sample period with the fewest time series with NaNs would also exclude periods like the 2008 financial crisis, which may or may not make sense. Excluding 2008 could lead to an underestimate of how haywire things can get (though including it could lead to overestimating the probability of certain rare events).
Some things to do:
Pick a sample period as long as possible but be aware of the limitations.
Do your best to handle survivorship bias: e.g. if NaNs represent delisting events, try to get some kind of delisting return.
You almost certainly will have an unbalanced panel with missing observations, and your algorithm will have to deal with that.
Another general finance / panel data point: selecting a sample at some time point t and then following it into the future is perfectly OK, but selecting a sample based upon what happens during or after the sample period can be incredibly misleading.
Code that does what you asked:
This should do what you asked and be quite fast. Be aware of the problems above, though, if whether an observation is missing is not random and orthogonal to what you care about.
Inputs are a T by n sized matrix X:
T = 360; % number of time periods (i.e. rows) in X
n = 15000; % number of time series (i.e. columns) in X
T_subsample = 72; % desired length of sample (i.e. rows of newX)
% number of possible starting points for series of length T_subsample
nancount_periods = T - T_subsample + 1;
nancount = zeros(n, nancount_periods, 'int32'); % will hold a count of NaNs
X_isnan = int32(isnan(X));
nancount(:,1) = sum(X_isnan(1:T_subsample, :))'; % initialize with the first window
% We need to obtain a count of nans in T_subsample sized window for each
% possible time period
j = 1;
for i = T_subsample+1:T
    % one pass: add the period entering the window and subtract the one leaving it
    nancount(:,j+1) = nancount(:,j) + X_isnan(i,:)' - X_isnan(j,:)';
    j = j + 1;
end
indicator = nancount==0; % true where the (series, starting period) window has no NaNs
% number of NaN-free series of length T_subsample for each starting period
max_subsample_size_by_starting_period = sum(indicator);
max_subsample_size = max(max_subsample_size_by_starting_period);
% find the best starting period
starting_period = find(max_subsample_size_by_starting_period==max_subsample_size, 1);
ending_period = starting_period + T_subsample - 1;
columns_mask = indicator(:,starting_period);
columns = find(columns_mask); %holds the column ids we are using
newX = X(starting_period:ending_period, columns_mask);
Here's an idea:
Assuming you can rearrange the series, calculate the pairwise distance between them (you decide the metric, but if you only look at NaN vs. non-NaN, Hamming distance is fine).
Now hierarchically cluster the series and rearrange them using either a dendrogram
or http://www.mathworks.com/help/bioinfo/examples/working-with-the-clustergram-function.html
You should probably prune any series that doesn't have a minimum number of non-NaN values before you start.
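A rough sketch of this idea (requires the Statistics and Machine Learning Toolbox; data is the 360-by-15000 matrix from the question, and pruning short series first, as suggested, keeps the distance computation manageable):
mask = double(isnan(data))';           % one row per series, 1 where a value is NaN
D = pdist(mask, 'hamming');            % pairwise Hamming distance between NaN patterns
Z = linkage(D, 'average');             % hierarchical clustering of the series
ord = optimalleaforder(Z, D);          % leaf order that groups similar NaN patterns
data_rearranged = data(:, ord);        % columns rearranged by NaN pattern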
First, I have only little insight into financial mathematics. My understanding is that you want to find the longest continuous chain of non-NaN values in each time series, sort the time series by the length of this chain, and discard every time series whose longest chain is below a threshold. This can be done as follows:
data = rand(360,15e3);
data(abs(data) <= 0.02) = NaN;
%% sort and chop data based on amount of consecutive non-NaN values
binary_data = ~isnan(data);
% find edges, denote their type and calculate the biggest chunk in each
% column
edges = [2*binary_data(1,:)-1; diff(binary_data, 1)];
chunk_size = diff(find(edges));
chunk_size(end+1) = numel(edges)-sum(chunk_size);
[row, ~, id] = find(edges);
num_row_elements = diff(find(row == 1));
num_row_elements(end+1) = numel(chunk_size) - sum(num_row_elements);
%a chunk of NaN has a -1 in id, a chunk of non-NaN a 1
chunks_per_row = mat2cell(chunk_size .* id,num_row_elements,1);
% sort by largest consecutive block of non-NaNs
max_size = cellfun(@max, chunks_per_row);
[max_size_sorted, idx] = sort(max_size, 'descend');
data_sorted = data(:,idx);
% remove all elements that only have block sizes smaller then some number
some_number = 20;
data_sort_chop = data_sorted(:,max_size_sorted >= some_number);
Note that this can be done a lot more simply if the order of periods within a time series doesn't matter, i.e. if data([1 2 3],id) and data([3 1 2],id) are considered identical; a sketch of that simpler variant follows.
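A sketch of that simpler variant, where only the number of non-NaN periods per series matters (reusing data and some_number from the code above):
nonnan_count = sum(~isnan(data), 1);                  % non-NaN periods per series
[count_sorted, idx] = sort(nonnan_count, 'descend');  % most complete series first
data_sorted = data(:, idx);
data_sort_chop = data_sorted(:, count_sorted >= some_number); % drop short series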
What I do not know is whether you want to discard all periods within a time series that don't belong to the longest chain, extract all those chains as individual time series, ...
Feel free to drop a comment if it has to be more specific.

Matlab: how to find the fundamental frequency from a list of energy peaks

In a spectrogram, I have a set of harmonic frequencies (peaks in the spectrum) for a given time frame:
5215
3008.1
2428.1
2214.9
1630.2
1315
997.01
881.39
779.04
667.47
554.21
445.77
336.39
237.69
124.6
If I do -diff(ans), I get the differences between the formants, which hints at the fact that the fundamental frequency f_0 of this frame is around 110 Hz:
2206.9
580.06
213.11
584.72
315.24
317.97
115.62
102.35
111.57
113.26
108.44
109.38
98.705
113.08
It is clear that the last 9 values of the first list are harmonics of the same f_0, because the last 8 values of the second list are around the same value. Their mean is 109.05 (but I'm not sure if that is the correct f_0). How can I calculate f_0 in a neat function?
I found an answer myself: I calculate the difference between the two lowest-frequency peaks whose energy values are above a certain threshold. Then I check whether that difference appears (within a certain range) in the list of frequencies.
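A minimal sketch of that approach, using the peak list from this question (the 15 Hz tolerance and the assumption that the listed peaks have already passed the energy threshold are mine):
% peak frequencies for one frame (taken from the question); in practice these
% would come from the spectrogram peak picking, restricted to peaks whose
% energy is above a chosen threshold
freqs = [5215 3008.1 2428.1 2214.9 1630.2 1315 997.01 881.39 779.04 ...
         667.47 554.21 445.77 336.39 237.69 124.6];
f = sort(freqs);                        % lowest frequencies first
f0 = f(2) - f(1);                       % spacing of the two lowest peaks
tol = 15;                               % tolerance in Hz (an assumption)
if any(abs(f - f0) <= tol)              % the candidate should itself appear as a peak
    fprintf('Estimated f0: %.2f Hz\n', f0);
else
    fprintf('No consistent fundamental found for this frame\n');
end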

Pruning data for better viewing on loglog graph - Matlab

just wondering if anyone has any ideas about an issue I'm having.
I have a fair amount of data that needs to be displayed on one graph. Two theoretical lines that are bold and solid are displayed on top, then 10 experimental data sets that converge to these lines are graphed, each using a different identifier (eg the + or o or a square etc). These graphs are on a log scale that goes up to 1e6. The first few decades of the graph (< 1e3) look fine, but as all the datasets converge (> 1e3) it's really difficult to see what data is what.
There are over 1000 data points per decade, which I can prune linearly to an extent, but if I do this too much the lower end of the graph will suffer in resolution.
What I'd like to do is prune logarithmically, strongest at the high end, working back to 0. My question is: how can I get a logarithmically scaled index vector rather than a linear one?
My initial assumption was that as my data is linear I could just use a linear index to prune, which led to something like this (but for all decades):
% grab indices per decade
ind12 = find(y >= 1e1 & y <= 1e2);
indlow = find(y < 1e2);
indhigh = find(y > 1e4);
ind23 = find(y >= 1e2 & y <= 1e3);
ind34 = find(y >= 1e3 & y <= 1e4);
% we want ind12 indexes in this decade, find the spacing
tot23 = round(length(ind23)/length(ind12));
tot34 = round(length(ind34)/length(ind12));
% grab the ones to keep
ind23keep = ind23(1):tot23:ind23(end);
ind34keep = ind34(1):tot34:ind34(end);
indnew = [indlow' ind23keep ind34keep indhigh'];
loglog(x(indnew), y(indnew));
But this causes the prune to behave in a jumpy fashion obviously. Each decade has the number of points that I'd like, but as it's a linear distribution, the points tend to be clumped at the high end of the decade on the log scale.
Any ideas on how I can do this?
I think the easiest way to do this would be to use the LOGSPACE function to generate a set of indices into your data. For example, to create a set of 100 points logarithmically spaced from 1 to N (the number of points in your data), you can try the following:
indnew = round(logspace(0,log10(N),100)); % create the log-spaced index
indnew = unique(indnew);                  % remove duplicate indices
loglog(x(indnew),y(indnew));              % plot the indexed data
Creating a logarithmically-spaced index like this will result in fewer values being chosen from the end of the vector relative to the start, thus pruning values more severely towards the end of the vector and improving the appearance of the log plot. It would therefore be most effective with vectors that are sorted in ascending order.
The way I understand the problem is that your x-values are linearly spaced, so that if you plot them logarithmically, there are way more data points in 'higher' decades, so that markers lie extremely close to one another. For example, if x goes from 1 to 1000, there are 10 points in the first decade, 90 in the second, and 900 in the third. You want to have, say, 3 points per decade instead.
I see two ways to solve the problem. The easier one is to use differently colored lines instead of different markers. Thus, you don't sacrifice any data points, and you can still distinguish everything.
The second solution is to create an unevenly spaced index. Here's how you can do that.
% create some data
x = 1:1000;
y = 2.^x;
% plot the graph and see the dots 'coalesce' very quickly
figure, loglog(x,y,'.')
% for the example, I use a step size of 0.7, which is roughly log(2)
xx = 0.7:0.7:log(x(end)); % this is where I want the data to be plotted
% find the indices where we want to plot by finding the closest log(x)-values;
% run unique to avoid multiples of the same index
indnew = unique(interp1(log(x),1:length(x),xx,'nearest'));
% plot with fewer points
figure, loglog(x(indnew),y(indnew),'.')