I need to compute the length of a wav file by trimming off the "relative" silence at the beginning and end of the file, then return the duration in milliseconds. I know you can find the duration of a wav file like this:
[w,fs] = wavread('file.wav');
duration_s = length(w)/fs; % duration in seconds; avoid shadowing the built-in length()
My logic is to use the first column of the waveform matrix (left channel), pick an arbitrary threshold value, then traverse the matrix in sample windows. If the maximum value of a window is greater than the threshold, I start counting time there; when the max of a window drops below the threshold, I stop. This is what I have so far:
% Just use the first column of the waveform matrix (column 2 gives strange results)
w = w(:,1);
% Get the threshold value by multiplying the max amplitude of the waveform by a
% predetermined percentage (this varies with testing)
thold = max(w) * .04;
Just need help on how to actually traverse the matrix through sampling windows.
I am not entirely certain what you want to achieve, but I think you can use your own suggestion (sort of):
soundIdx = find(abs(w) > thold);
total_time_w_sound = sum(abs(w) > thold)/fs;
time_from_first_sound_to_last = (soundIdx(end)-soundIdx(1))/fs;
Is it one of those you are trying to find?
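Putting those pieces together, a minimal sketch of the whole computation might look like this (it reuses the 4% threshold from the question; audioread replaces the older wavread in recent MATLAB releases):
[w, fs] = audioread('file.wav');     % or wavread on older releases
w = w(:,1);                          % left channel only
thold = max(w) * 0.04;               % threshold as defined in the question
soundIdx = find(abs(w) > thold);     % samples above the "relative silence" level
duration_ms = 1000 * (soundIdx(end) - soundIdx(1) + 1) / fs;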
I am relatively new to MATLAB. I computed the running (consecutive) mean of a set of 1e6 random numbers generated with a given mean and standard deviation. Initially the calculated mean fluctuates, and then it converges to a certain value.
I would like to know the index (e.g. the 100th position) at which the mean converges. I have no idea how to do that.
I tried using a logical operator, but I have to go through 1e6 data points, and even then I still can't find the index.
Y_c= sigma_c * randn(n_r, 1) + mu_c; %Random number creation
Y_f=sigma_f * randn(n_r, 1) + mu_f;%Random number creation
P_u=gamma*(B*B)/2.*N_gamma+q*B.*N_q + Y_c*B.*N_c; %Calculation of Ultimate load
prog_mu=cumsum(P_u)./cumsum(ones(size(P_u))); %Progressive Cumulative Mean of system response
logical(diff(prog_mu==0)); %Find index
I suspect the issue is that the mean will never truly be constant, but will rather fluctuate around the "true mean". As such, you'll most likely never encounter a situation where the two consecutive values of the cumulative mean are identical. What you should do is determine some threshold value, below which you consider fluctuations in the mean to be approximately equal to zero, and compare the difference of the cumulative mean to that value. For instance:
epsilon = 0.01;
const_ind = find(abs(diff(prog_mu))<epsilon,1,'first');
where epsilon will be the threshold value you choose. The find command will return the index at which the variation in the cumulative mean first drops below this threshold value.
EDIT: As was pointed out, this method may fail if the first few random numbers happen to differ by less than epsilon even though the mean has not yet converged. I would like to suggest a different approach, then.
We calculate the cumulative means, as before, like so:
prog_mu=cumsum(P_u)./cumsum(ones(size(P_u)));
We also calculate the difference in these cumulative means, as before:
df_prog_mu = diff(prog_mu);
Now, to ensure that convergence has been achieved, we find the first index where the difference in the cumulative mean is below the threshold value epsilon and all subsequent differences are also below the threshold. To phrase this another way, we want the index just after the last position in the array where the difference in the cumulative mean is above the threshold:
conv_index = find(abs(df_prog_mu) >= epsilon, 1, 'last') + 1;
In doing so, we guarantee that the variation at that index, and at all subsequent indices, stays below your predetermined threshold value.
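For illustration, here is a self-contained version on synthetic data; the mean, standard deviation, and tolerance below are made-up values rather than the asker's:
rng(0);                                       % reproducible example
x = 2 + 0.5*randn(1e6,1);                     % hypothetical mu = 2, sigma = 0.5
prog_mu = cumsum(x) ./ (1:numel(x))';         % cumulative (running) mean
df_prog_mu = diff(prog_mu);
epsilon = 1e-6;                               % made-up tolerance
conv_index = find(abs(df_prog_mu) >= epsilon, 1, 'last') + 1;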
I wouldn't imagine that the mean would suddenly become constant at a single index; wouldn't it asymptotically approach a constant value? I would recommend a for loop to calculate the running mean (it sounds like maybe you've already done this part), like this:
avg = zeros(1, length(x)); % preallocate for speed
for k=1:length(x)
avg(k) = mean(x(1:k));
end
Then plot the consecutive mean:
plot(avg)
hold on % this will allow us to plot more data on the same figure later
If you're trying to find the point at which the consecutive mean comes within a certain range of the true mean, try this:
Tavg = 5; % or whatever your true mean is
err = 0.01; % the range you want the consecutive mean to reach before we say that it "became constant"
inRange = avg>(Tavg-err) & avg<(Tavg+err); % gives you a binary logical array telling you which values fell within the range
q = 1000; % set this as high as you can while still getting a value for constIndex
constIndex = [];
for k = 1:length(inRange)-q
    if all(inRange(k:k+q)) % the mean stays within the range for the next q samples
        constIndex = k;
        break % keep the first index from which the mean stays in range
    end
end
The other answer takes a similar approach, but it makes the unsafe assumption that the first value to fall within the range is where the function starts to converge. Any value could randomly fall within that range; we need to make sure that the following values also fall within it. In the above code, you can tune q and err to optimize your result. I would recommend double-checking the result by plotting:
plot(constIndex, avg(constIndex), '*') % mark the convergence point on the earlier plot of avg
I am using MATLAB to find the number of peaks of a signal.
I'm trying to plot the number of peaks of a signal filtered with an N-point moving average filter, where N goes from 2 to 30 (the first element of the resulting array is the number of peaks with no filter applied). My data array (imported from CSV, with double values between 0 and 1) has around 50k points. When I give the filter only part of the data, e.g. 100, 500, or 1000 points via array slicing, the number of peaks decreases as expected. However, when I give it the whole data set, or even just 2000 points, the number of peaks stays fixed at 127.
I changed the number of data points given to the filter to find out why this happens, switching between the commented alternatives shown below. With fewer than 1000 data points the plot was fine.
Here is the signal
https://www.dropbox.com/s/e1bkcjn5ta5q610/exampleSignal.csv?dl=0
Please import it from the 4th element to the end; it has some strange data at the beginning that I have not used. VarName1 is the name of the imported column vector.
numberOfPeaks = zeros(30,1,'int8');
pks = findpeaks(VarName1); % VarName1(1:1000,:) (when no filter applied)
numberOfPeaks(1) = size(pks,1);
for i=2:30
h = 1/i*ones(1,i,'double');
y = filter(h,1,VarName1); % VarName1(1:1000,:)
numberOfPeaks(i) = size(findpeaks(y),1);
end
plot(1:30,numberOfPeaks);
I expect a plot in which the number of peaks keeps decreasing as N grows when the whole data set is given, but instead the plot I get is flat at 127.
I realised that the problem is the int8 type I use. It can only hold values up to 127, so my larger peak counts were clipped to 127.
Changing it to double solves the problem.
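For reference, this is the loop from the question with the only change being the preallocation type, so counts above 127 are no longer clipped:
numberOfPeaks = zeros(30,1);                 % default double instead of 'int8'
pks = findpeaks(VarName1);                   % no filter applied
numberOfPeaks(1) = size(pks,1);
for i = 2:30
    h = 1/i*ones(1,i);                       % i-point moving average
    y = filter(h,1,VarName1);
    numberOfPeaks(i) = size(findpeaks(y),1);
end
plot(1:30,numberOfPeaks);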
I would like to identify the largest possible contiguous subsample of a large data set. My data set consists of roughly 15,000 financial time series of up to 360 periods in length. I have imported the data into MATLAB as a 360 by 15,000 numerical matrix.
This matrix contains a lot of NaNs due to some of the financial data not being available for the entire period. In the illustration, NaN entries are shown in dark blue, and non-NaN entries appear in light blue. It is these light blue non-NaN entries which I would like to ideally combine into an optimal subsample.
I would like to find the largest possible contiguous block of data that is contained in my matrix, while ensuring that my matrix contains a sufficient number of periods.
In a first step I would like to sort my matrix from left to right in descending order by the number of non-NaN entries in each column, that is, I would like to sort by the vector obtained by entering sum(~isnan(data),1).
In a second step I would like to find the sub-array of my data matrix that is at least 72 entries along the first dimension and is otherwise as large as possible, measured by the total number of entries.
What is the best way to implement this?
A big warning (may or may not apply depending on context)
As Oleg mentioned, when an observation is missing from a financial time series, it's often missing for a reason: e.g. the entity went bankrupt, the entity was delisted, or the instrument did not trade (i.e. it was illiquid). Constructing a sample without NaNs is likely equivalent to constructing a sample where none of these events occur!
For example, if this were hedge fund return data, selecting a sample without NaNs would exclude funds that blew up and ceased trading. Excluding imploded funds would bias estimates of expected returns upwards and estimates of variance or covariance downwards.
Picking a sample period with the fewest time series with NaNs would also exclude periods like the 2008 financial crisis, which may or may not make sense. Excluding 2008 could lead to an underestimate of how haywire things could get (though including it could lead to overestimate the probability of certain rare events).
Some things to do:
Pick a sample period as long as possible but be aware of the limitations.
Do your best to handle survivorship bias: eg. if NaNs represent delisting events, try to get some kind of delisting return.
You almost certainly will have an unbalanced panel with missing observations, and your algorithm will have to deal with that.
Another general finance / panel data point: selecting a sample at some time point t and then following it into the future is perfectly OK, but selecting a sample based upon what happens during or after the sample period can be incredibly misleading.
Code that does what you asked:
This should do what you asked and should be quite fast. Be aware of the problems above, though, if whether an observation is missing is not random and orthogonal to what you care about.
Inputs are a T by n sized matrix X:
T = 360; % number of time periods (i.e. rows) in X
n = 15000; % number of time series (i.e. columns) in X
T_subsample = 72; % desired length of sample (i.e. rows of newX)
% number of possible starting points for series of length T_subsample
nancount_periods = T - T_subsample + 1;
nancount = zeros(n, nancount_periods, 'int32'); % will hold a count of NaNs
X_isnan = int32(isnan(X));
nancount(:,1) = sum(X_isnan(1:T_subsample, :))'; % initialize with the first window
% We need to obtain a count of nans in T_subsample sized window for each
% possible time period
j = 1;
for i=T_subsample + 1:T
% One pass: add new period in the window and subtract period no longer in the window
nancount(:,j+1) = nancount(:,j) + X_isnan(i,:)' - X_isnan(j,:)';
j = j + 1;
end
indicator = nancount==0; % indicator(series, starting_period) is true when that
% series has no NaNs in the window starting at that period
% number of nonan series of length T_subsample by starting period
max_subsample_size_by_starting_period = sum(indicator);
max_subsample_size = max(max_subsample_size_by_starting_period);
% find the best starting period
starting_period = find(max_subsample_size_by_starting_period==max_subsample_size, 1);
ending_period = starting_period + T_subsample - 1;
columns_mask = indicator(:,starting_period);
columns = find(columns_mask); %holds the column ids we are using
newX = X(starting_period:ending_period, columns_mask);
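A quick sanity check on the result, using the variables defined above:
assert(~any(isnan(newX(:))));          % the selected block should be NaN-free
assert(size(newX,1) == T_subsample);   % and should have the requested number of rows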
Here's an idea:
Assuming you can rearrange the series, calculate the distance between them (you decide the metric, but if you're only comparing NaN vs. non-NaN patterns, Hamming distance is fine).
Now hierarchically cluster the series and rearrange them, using either a dendrogram
or http://www.mathworks.com/help/bioinfo/examples/working-with-the-clustergram-function.html
You should probably prune any series that doesn't have a minimum number of non-NaN values before you start.
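A rough sketch of this idea, assuming the Statistics and Machine Learning Toolbox is available and using illustrative names (data for the 360-by-15000 matrix and 72 as the minimum number of non-NaN periods from the question):
mask = isnan(data);                    % NaN pattern of each series
keep = sum(~mask, 1) >= 72;            % prune series with too few non-NaN periods
mask = mask(:, keep);
D = pdist(double(mask)', 'hamming');   % pairwise distance between NaN patterns (memory-heavy for ~15k series)
Z = linkage(D, 'average');             % hierarchical clustering
[~, ~, leafOrder] = dendrogram(Z, 0);  % leaf order places similar patterns next to each other
data_clustered = data(:, keep);
data_clustered = data_clustered(:, leafOrder);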
First of all, I have only limited insight into financial mathematics. My understanding is that you want to find the longest continuous chain of non-NaN values in each time series, sort the time series by the length of this chain, and discard every time series that doesn't contain a chain above a threshold. This can be done using:
data = rand(360,15e3);
data(abs(data) <= 0.02) = NaN;
%% sort and chop data based on amount of consecutive non-NaN values
binary_data = ~isnan(data);
% find edges, denote their type and calculate the biggest chunk in each
% column
edges = [2*binary_data(1,:)-1; diff(binary_data, 1)];
chunk_size = diff(find(edges));
chunk_size(end+1) = numel(edges)-sum(chunk_size);
[row, ~, id] = find(edges);
num_row_elements = diff(find(row == 1));
num_row_elements(end+1) = numel(chunk_size) - sum(num_row_elements);
%a chunk of NaN has a -1 in id, a chunk of non-NaN a 1
chunks_per_row = mat2cell(chunk_size .* id,num_row_elements,1);
% sort by largest consecutive block of non-NaNs
max_size = cellfun(@max, chunks_per_row);
[max_size_sorted, idx] = sort(max_size, 'descend');
data_sorted = data(:,idx);
% remove all elements that only have block sizes smaller than some number
some_number = 20;
data_sort_chop = data_sorted(:,max_size_sorted >= some_number);
Note that this can be done a lot more simply if the order of periods within a time series doesn't matter, i.e. if data([1 2 3],id) and data([3 1 2],id) are considered identical (see the sketch below).
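A sketch of that simpler variant (my reading of it, reusing data and some_number from above): just sort the columns by their total count of non-NaN entries and chop.
non_nan_count = sum(~isnan(data), 1);
[count_sorted, idx] = sort(non_nan_count, 'descend');
data_sorted = data(:, idx);
data_sort_chop = data_sorted(:, count_sorted >= some_number);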
What I don't know is whether you want to discard all periods within a time series that don't belong to the longest chain, extract all those chains as individual time series, and so on.
Feel free to drop a comment if this needs to be more specific.
I have an EMG signal of a subject walking on a treadmill.
We used footswitches to be able to see when the subject is placing his foot, so we can see how many periods (steps) there are in time.
We would like to cut the signal in periods (steps) and get one average signal (100% step cycle).
I tried the reshape function, but it does not work when I count 38 steps:
nwaves = 38;
sig2 = reshape(sig,[numel(sig)/nwaves nwaves])';
avgSig = mean(sig2,1);
plot(avgSig);
the error displayed is this: Size arguments must be real integers.
Can anyone help me with this? Thanks!
First of all, reshaping the array is a bad approach to the problem. In the real world you cannot assume that the person on the treadmill steps rhythmically with millisecond precision (i.e. that every step spans exactly the same number of samples).
A more realistic approach is to use the footswitch signal: assuming it really is a switch on a single foot (1 = foot on, 0 = foot off) and its transitions are filtered to avoid noise (a Schmitt trigger, for example), you can get the sample indices at which the foot is lifted from the treadmill with:
foot_off = find(diff(footswitch) < 0);
then you can transform your signal into a cell array of vectors (of variable lengths) containing the data between consecutive steps:
step_len = diff([0; foot_off(:); numel(footswitch)]); % column vector of samples per step
sig2 = mat2cell(sig(:), step_len, 1);
The problem now is you can't apply mean() to the signal slices in order to get an "average step": you must process each step first, then average the results.
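One way to do that last step (my own sketch, not part of the original answer; it assumes each step in sig2 has at least two samples) is to resample every step onto a common 0-100% step-cycle axis and then average across steps:
npts = 100;                                  % samples per normalized step cycle
steps = zeros(numel(sig2), npts);
for k = 1:numel(sig2)
    s = sig2{k};
    % map this step onto a 0..1 axis and resample it to npts points
    steps(k,:) = interp1(linspace(0,1,numel(s)), s, linspace(0,1,npts));
end
avgSig = mean(steps, 1);                     % average signal over the step cycle
plot(linspace(0,100,npts), avgSig);
xlabel('% of step cycle');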
It's probably because numel(sig)/nwaves isn't an integer. You need to round it to the nearest integer with round(numel(sig)/nwaves).
EDIT based on comments:
Your problem is that you can't divide 51116 evenly by 38 (it's 1345.2), so you can't reshape your signal into 38 equal-length chunks. You need a signal whose length is an exact multiple of 38 if you want to reshape it that way. Either that, or remove the last (or first) 6 values from your signal so that you have an exact multiple of 38 (1345 * 38 = 51110):
nwaves = 38;
n_chunks = floor(numel(sig)/nwaves); % floor, so we never index past the end of sig
max_sig_length = n_chunks * nwaves;
sig2 = reshape(sig(1:max_sig_length),[n_chunks nwaves])';
avgSig = mean(sig2,1);
plot(avgSig);
I use imtophat to apply a filter to an m x n array. I then find the local max using imextendedmax(). I get mostly 0's everywhere except for 1's in the general areas where I am expecting a local max. The weird thing is, though, that I don't get just one local max. Instead in these places I get MANY elements with 1's such as
00011100000
00111111000
00000110000
yet the values there are close but NOT equal so I would expect that there would be one that is higher than all of the rest. So I'm wondering:
whether this is a bug and how I might fix it
how you would choose the element with the highest value among these 1's.
a) This is a feature. You are calling imextendedmax with two input arguments. The second input is a measure for how different pixels can be from the maximum and still be counted for the extended maximum.
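For example, shrinking that second argument tightens the regions that count as maxima (img here stands for your filtered image, and the tolerance values are purely illustrative):
bwLoose = imextendedmax(img, 0.5);   % larger tolerance: more pixels counted as "maximal"
bwTight = imextendedmax(img, 0.05);  % smaller tolerance: tighter maxima regions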
b) You can choose the elements with the highest value using max on the pixels within the group.
%# for testing, create a mask with two groups and an image of corresponding size
msk = repmat(logical([0 0 0 1 1 1 0 0 0 0 0;...
0 0 1 1 1 1 1 1 0 0 0;...
0 0 0 0 0 1 1 0 0 0 0]),1,2);
img = rand(size(msk));
imSize = size(img);
%# to find groups of connected ones, apply connected component labeling
cc = bwconncomp(msk);
%# loop through all groups and find the location of the maximum intensity pixel
%# You could do this without loop, but it would be much less readable
maxCoordList = zeros(cc.NumObjects,2);
for i = 1:cc.NumObjects
%# read intensities corresponding to group
int = img(cc.PixelIdxList{i});
%# find which pixel is brightest
[maxInt,maxIdx] = max(int);
%# maxIdx indexes into PixelIdxList, which indexes into the image.
%# convert to [row,col] and store one row of coordinates per group
[r,c] = ind2sub(imSize,cc.PixelIdxList{i}(maxIdx));
maxCoordList(i,:) = [r,c];
end
%# confirm by plotting
figure
imshow(img,[])
hold on
plot(maxCoordList(:,2),maxCoordList(:,1),'.r')