I am using MATLAB R2020a on macOS. I have a signal 'cycle_periods' consisting of the cycle periods of an ECG signal, on which I would like to compute an exponentially weighted mean, so that older values are weighted less than newer ones. However, I would like this to be done on an element-by-element basis, such that a given element is only included in the overall weighted mean if the weighted mean with the current sample neither exceeds 1.5 times nor falls below 0.5 times the weighted mean without that element.
I have used the dsp.MovingAverage System object as shown below to calculate the weighted mean, but I am unsure how to adapt it to include my conditions.
% Exponentially weighted moving mean for stable cycle periods
movavgExp = dsp.MovingAverage('Method', 'Exponential weighting', 'ForgettingFactor', 0.1);
mean_cycle_period_exp = movavgExp(cycle_period_stable);
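To make the rule concrete, here is pseudocode for the logic I have in mind (untested; the simple recursion below is the textbook exponential weighting and is not necessarily identical to what dsp.MovingAverage computes internally, and ideally I would still like to use dsp.MovingAverage for this):

% Sketch only: conditional exponentially weighted mean of cycle_periods
lambda = 0.1;                               % forgetting factor, as above
running_mean = cycle_periods(1);            % seed with the first cycle period
for k = 2:numel(cycle_periods)
    candidate = lambda*running_mean + (1 - lambda)*cycle_periods(k);
    % include the sample only if it moves the mean by less than +/-50%
    if candidate <= 1.5*running_mean && candidate >= 0.5*running_mean
        running_mean = candidate;
    end
end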
I would very much appreciate any help regarding this matter, thanks in advance.
I'm dealing with the CWT, and I have a big problem converting scales to frequencies. In the MATLAB Wavelet Tutorial they use this expression to convert scales to frequencies.
But if I use the built-in function scal2frq I obtain a different result.
I don't understand the role of the Morlet Fourier Factor.
Thanks in advance
It is a pretty complicated concept, and one I only somewhat understand myself. I'll write some points here so that you can figure it out more easily.
A simple fact is that:
Scale is inversely proportional to frequency.
For example, imagine we have a 1-100 Hz range of frequencies in some time-series data, such as stock-market or earthquake data. Scale is "supposed to be" the inverse of that. For instance, if the scale were in the range of 1 to 100, we would have:
Scale (1/Hz)    Frequency (Hz)
1               100
50              2
100             1
Therefore,
The frequency derived from the scale is not the actual frequency of the time-series data (e.g., stock market, earthquake) that we know of; the two are only related, inversely.
We can safely say that what we are calculating here are "pseudo-frequencies", which is what MATLAB does (by approximation). You can read about the approximation process in the documentation, in the section on pseudo-frequencies.
MATLAB calculates those pseudo-frequencies based on:
In wavelet analysis, the way to relate scales to frequencies is to determine the center frequency of the wavelet function:
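To the best of my knowledge, the relation used there (and implemented by scal2frq) is F_a = F_c / (a * Δ), where F_c is the center frequency of the wavelet, a is the scale, and Δ is the sampling period.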
which you can see visually in this image; of course it differs when we change the type of wavelet used in the calculation, so that center frequency changes each time in the approximation process.
That "MorletFourierFactor" is a variable to approximate a constant so that when you would do the 1/scale, it would closely approximate those "pseudo-frequencies".
I thought this image about shifting (time axis) and scaling (frequency axis) might be a little helpful to look into as well:
The bottom line is: don't worry about the pseudo-frequencies; you probably won't need them. If you want a frequency spectrum, you can likely just apply a standard frequency-domain method (such as the Fast Fourier Transform) to whatever time-series data you have.
If you really do want that mapping, you can also try to design a method to approximate it yourself.
Source: Harvard Seismology
I understand that if a number gets closer to zero than realmin, then MATLAB converts the double to a denormal. I am noticing that this incurs a significant performance cost. In particular, I am using a gradient descent algorithm in which, near convergence, the gradients (in backprop for my bespoke neural network) drop below realmin, and the algorithm then incurs a heavy performance cost (due to, I am assuming, type conversion behind the scenes). I have used the following code to validate my gradient matrices so that no number falls below realmin:
function mat = validateSmallDoubles(obj, mat, threshold)
    mat = mat.*(abs(mat) > threshold);   % zero out elements with magnitude below threshold
end
Is this usual practice, and what value should threshold take? (Obviously you want it as close to realmin as possible, but not too close, otherwise any further division operations will send some elements of mat below realmin after validation.) Also, specifically for neural networks, where are the best places to do gradient validation without ruining the network's ability to learn? I would be grateful to know what solutions people with experience in training neural networks have found; I am sure this is a problem in all languages. Tentative threshold values have ruined my network's learning.
I do not know if it is related to your problem, but I had a similar problem with underflows while computing an exponentially weighted average of gradients (say, while implementing Momentum or Adam).
In particular, at some point you do something like:
v := 0.9*v + 0.1*g
where v is the exponentially weighted average of your gradient g. If over many successive iterations the same element of your g matrix remains 0, your v quickly becomes very small and you hit denormals.
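A quick way to see how fast this happens (purely illustrative, not from the original post):

% With a 0.9 forgetting factor and a gradient entry stuck at zero, v shrinks
% by a factor of 0.9 every iteration and ends up in the denormal range
% (below realmin) after a few thousand iterations.
v = 0.1;
for k = 1:7000
    v = 0.9*v + 0.1*0;      % the gradient element stays at zero
end
disp(v > 0 && v < realmin)  % true: v is now a denormal (subnormal) double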
So the question is: why all those zeros? In my case the culprit was the ReLU units, which output a lot of zeros (if x < 0, relu(x) is zero). When ReLU outputs zero for a given neuron, the related weight has no effect, which means the corresponding partial derivative in g is zero. So it happened that, over many successive iterations, that particular neuron was never fired.
To avoid having zero activations (and derivatives), I used "leaky ReLU", so as to have a very small derivative instead.
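For reference, a minimal sketch of a leaky ReLU and its derivative (the slope alpha is a typical choice, not something prescribed above):

alpha      = 0.01;                                       % small positive slope for x <= 0
leakyRelu  = @(x) max(x, 0) + alpha*min(x, 0);
dLeakyRelu = @(x) double(x > 0) + alpha*double(x <= 0);  % never exactly zero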
Another solution is to use gradient clipping before applying your weighted average, to keep your gradients above a minimum value, which is quite similar to what you did.
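One way such a floor could look in MATLAB (a sketch under the assumption that g is your gradient matrix; the value of g_min is arbitrary):

g_min  = 1e-300;                      % floor, comfortably above realmin
idx    = g ~= 0 & abs(g) < g_min;     % nonzero entries that are too small
g(idx) = sign(g(idx)) * g_min;        % push them up to the floor, keeping the sign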
I traced the diminishing-gradient occurrences to the Adam SGD optimiser: the biased moving-average matrix calculations in the Adam optimiser were causing MATLAB to carry out the denormal operations. After these calculations I simply thresholded the matrix elements for each layer to zero, with threshold = 10*realmin, without any effect on learning. I have yet to investigate why my moving averages were getting so close to zero, as my architecture and weight-initialisation priors would normally mitigate this.
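In code, the fix amounted to something like the following (the names m and v for the Adam moment matrices are just illustrative):

threshold = 10*realmin;
m(abs(m) < threshold) = 0;   % biased first-moment (mean) estimate, per layer
v(abs(v) < threshold) = 0;   % biased second-moment (variance) estimate, per layer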
I am trying to estimate the quasi-stationary part of a signal in MATLAB. It is a 1-second-long sound signal that belongs to a bird.
I am using MFCCs to extract features, but I would like a window size for the MFCC computation that is guaranteed to operate on a statistically quasi-stationary part.
My questions are:
1. Do you think it is a solid approach if I iterate by varying my window size from 1 second down to smaller intervals, observing the change in the second moment of the features, and deciding on the point where the second moment stops changing? (A rough sketch of what I mean follows this list.)
2. If I use the Shannon entropy method, again varying my MFCC window size, how would the number of bits at the output of the entropy algorithm help me identify the quasi-stationary part of the signal?
3. Are there any other ideas?
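Sketch of the idea in question 1 (illustrative only; extract_mfcc is a placeholder for whatever MFCC routine is used and is assumed to return one feature vector per analysis frame, and x and fs are the signal and its sample rate):

win_lengths   = round(fs * (1.0:-0.1:0.1));       % 1 s down to 0.1 s
second_moment = zeros(size(win_lengths));
for k = 1:numel(win_lengths)
    feats = extract_mfcc(x, fs, win_lengths(k));  % hypothetical helper, [numCoeffs x numFrames]
    second_moment(k) = mean(var(feats, 0, 2));    % average spread of each coefficient
end
% look for the window length below which second_moment stops changing
plot(win_lengths/fs, second_moment)
xlabel('window length (s)'), ylabel('mean feature variance')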
Q: MATLAB related: Can someone help me with MATLAB code for block averaging of a time-series dataset? Also, how do I determine the optimal number of blocks?
Background: I have a large time-series dataset (position versus time) which I break into 20 smaller blocks. I need to find the variance of position for each block. Since the data may be autocorrelated, normal averaging doesn't work for me, and I need to perform block averaging.
Check out the reshape command followed by a mean and/or var (for average or variance)
At the MATLAB terminal, type:
help reshape
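A minimal sketch of what that looks like for the block variances (assuming x is the position series and its length is trimmed to a multiple of the block count):

nBlocks  = 20;
blockLen = floor(numel(x)/nBlocks);
blocks   = reshape(x(1:nBlocks*blockLen), blockLen, nBlocks);  % one block per column
blockVar = var(blocks);    % 1-by-nBlocks variance of position in each block
blockAvg = mean(blocks);   % 1-by-nBlocks block means, if needed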
I'm running a PIV analysis on two consecutive images taken during an experiment to get the vector field. But I would like to know: based on what criteria do I have to choose the percentage of overlap between the two images for the cross-correlation process? 50%, 75%, ...? The PIVlab_GUI tool designed for MATLAB chooses a 50% overlap by default, but it allows changing it.
I just want to know the criteria by which I can tell how much overlap is best. Do the vectors become less accurate, more dependent, etc., as we increase or decrease the overlap?
My book "Fluid Mechanics Measurements" does not explain how to choose the overlap amount in the cross-correlation process, and I could not find any helpful online reference.
Any help is appreciated.
I suggest you read up on spectral estimation - which is basically equivalent to cross correlation when you segment the data and average the correlation estimates calculated from each segment (the cross correlation is the inverse Fourier transform of the cross spectrum). There's a book chapter on this stuff here, but you may want to find a more complete resource if you are unclear on the basics.
A short answer: increasing the overlap will increase the frequency resolution of the spectral estimate, and give you more segments to average over; your estimate will have a lower variance. But there are diminishing statistical returns the more you increase your overlap past 50%, while the computational complexity continues to rise (more segments = more calculations). Hence most people just choose 50% and have done with it.
It's important to note that you don't get any more information by using overlapping frames, you are simply increasing the frequency resolution (or time lag resolution, for correlation) - similar to the effect of zero-padding a signal before taking its Fourier transform - and this has statistical effects due to the way estimation of this type works.
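For the signal-processing analogue of that trade-off (not PIVlab itself), here is a small MATLAB sketch comparing 50% and 75% segment overlap when estimating a cross spectrum with Welch averaging; x, y and fs are assumed to be your two signals and their sample rate:

nwin = 256;                                        % segment length
[Pxy50, f] = cpsd(x, y, hann(nwin), round(0.50*nwin), nwin, fs);
[Pxy75, ~] = cpsd(x, y, hann(nwin), round(0.75*nwin), nwin, fs);
% The 75% case averages more (heavily correlated) segments, so the estimate
% is somewhat smoother, but at the cost of extra computation.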