MATLAB discrete wavelet transform: wfastmcd in wmulden

I am interested in deriving the discrete wavelet transform for noise reduction of more than 50,000 data points. I am using the MATLAB tool wmulden for the wavelet transform. Inside this function, another function called wfastmcd is invoked, which only takes 50,000 data points at a time. It would be very helpful if anyone could suggest how to partition the data to get the transform of the entire data set, or whether another MATLAB tool is available for this kind of calculation.

I've used a for loop to solve that one.
First of all, I calculated how many "steps" I needed to take over my signal, with a fixed window size of 50,000:
MAX_SAMPLES = 50000;
% mySignalSize is the size of my samples vector.
steps = ceil(mySignalSize/MAX_SAMPLES);
After that, I applied the wmulden function "steps" times, checking each time whether the current block runs past the end of the original signal vector, like the following:
% Wavelet fields
level = 5;
wname = 'sym4';
tptr = 'sqtwolog';
sorh = 's';
npc_app = 'heur';
npc_fin = 'heur';

den_signal = zeros(mySignalSize,1);

for i = 1:steps
    if (i*MAX_SAMPLES) <= mySignalSize
        x_den = wmulden(originalSignal((((i-1) * MAX_SAMPLES) + 1):(i*MAX_SAMPLES)), level, wname, npc_app, npc_fin, tptr, sorh);
        den_signal((((i-1) * MAX_SAMPLES) + 1):i*MAX_SAMPLES) = x_den;
    else
        old_step = (((i-1) * MAX_SAMPLES) + 1);
        new_step = mySignalSize - old_step;
        last_step = old_step + new_step;
        x_den = wmulden(originalSignal((((i-1) * MAX_SAMPLES) + 1):last_step), level, wname, npc_app, npc_fin, tptr, sorh);
        den_signal((((i-1) * MAX_SAMPLES) + 1):last_step) = x_den;
    end
end
That should do the trick.
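For what it's worth, the same chunking can be written a bit more compactly by clamping the end index of each block with min, which removes the need for the separate else branch (just a sketch using the same variable names as above):

den_signal = zeros(mySignalSize, 1);
for i = 1:steps
    first = (i-1)*MAX_SAMPLES + 1;                % start of this block
    last  = min(i*MAX_SAMPLES, mySignalSize);     % clamp the final, shorter block
    den_signal(first:last) = wmulden(originalSignal(first:last), ...
        level, wname, npc_app, npc_fin, tptr, sorh);
end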

Related

How to resample data input signal for a different number of samples?

I am using MATLAB R2020a on macOS. I have an ECG signal with a sampling frequency of 1 kHz. From cycle to cycle there is a different number of samples, since the cycle lengths differ. However, I would like an array that has an equal number of samples (around 800) for each cycle, such that those 800 sample points are automatically fitted to the original sample points regardless of how many original sample points there are. I know that the resample function allows resampling at a fraction of the frequency of the input signal, but I am not sure how this would help me achieve my aim, given that I would like to resample to a fixed number of points.
I would very much appreciate any suggestions. Thanks in advance
Here is my code:
% Delimit cycles in original maximum amplitude signal using indices from
% Pan Tompkins algorithm
number_cycles = round(length(qrs_i_raw)/2);
number_samples = 900;
cycle_points_maxamp = zeros(number_cycles, number_samples);
x_eachcycle = zeros(number_cycles, number_samples);
values_x = zeros(number_samples, 1);
y_eachcycle = zeros(number_cycles, number_samples);
values_y = zeros(number_samples, 1);
z_eachcycle = zeros(number_cycles, number_samples);
values_z = zeros(number_samples, 1);
v_eachcycle = zeros(number_cycles, number_samples);
w_eachcycle = zeros(number_cycles, number_samples);

for currentcycle = 1:number_cycles
    values_maxamp = maxamp(qrs_i_raw(currentcycle):qrs_i_raw(currentcycle + 1)); % need to resample to only generate 900 samples
    cycle_points_maxamp(currentcycle, 1:length(values_maxamp)) = values_maxamp;
    values_z(1 + 2*tau_milli_rounded(currentcycle):end) = cycle_points_maxamp(currentcycle, 1:end - 2*tau_milli_rounded(currentcycle));
    z_eachcycle(currentcycle, 1:length(values_z)) = values_z;
    values_y(1 + tau_milli_rounded(currentcycle):end) = cycle_points_maxamp(currentcycle, 1:end - tau_milli_rounded(currentcycle));
    y_eachcycle(currentcycle, 1:length(values_y)) = values_y;
    values_x(1:end) = cycle_points_maxamp(currentcycle, 1:end);
    x_eachcycle(currentcycle, 1:length(values_x)) = values_x;
    values_v = ((1/sqrt(6))*(x_eachcycle(currentcycle, 1:length(values_x))) + (y_eachcycle(currentcycle, 1:length(values_y))) - 2*(z_eachcycle(currentcycle, 1:length(values_z))));
    v_eachcycle(currentcycle, 1:length(values_v)) = values_v;
    values_w = ((1/sqrt(2))*(x_eachcycle(currentcycle, 1:length(values_x))) - (y_eachcycle(currentcycle, 1:length(values_y))));
    w_eachcycle(currentcycle, 1:length(values_w)) = values_w;
end
Here are the relevant variables:
maxamp
qrs_i_raw
cyclepointsarray
To resample a single cycle of the signal to a length of 800 points/samples, the resample() function can be used with the parameters 800 and length(Random_Cycle). Here I used a sinc signal arbitrarily. The resample() function will interpolate (upsample) or decimate (downsample) according to the number of available sample points in the original signal. I would first partition your signal into separate cycles, apply this type of resampling to each cycle, and then concatenate the resampled components to generate your full resampled ECG signal.
Random_Cycle = sinc(-2*pi:0.5:2*pi);
subplot(1,2,1); stem(Random_Cycle);
title("Number of Samples: " + num2str(length(Random_Cycle)));
Resampled_Signal = resample(Random_Cycle,800,length(Random_Cycle));
subplot(1,2,2); stem(Resampled_Signal);
title("Number of Samples: " + num2str(length(Resampled_Signal)));
Ran using MATLAB R2019b
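Putting that together with the cycle boundaries from your question, a sketch of the partition / resample / concatenate idea could look like the following (qrs_i_raw and maxamp are taken from your code; the other names are just placeholders):

target_len = 800;                                  % desired samples per cycle
resampled_cycles = cell(length(qrs_i_raw) - 1, 1);
for c = 1:length(qrs_i_raw) - 1
    cycle = maxamp(qrs_i_raw(c):qrs_i_raw(c+1));   % one cycle of the original signal
    cycle = cycle(:);                              % force column orientation
    resampled_cycles{c} = resample(cycle, target_len, length(cycle));
end
resampled_ecg = vertcat(resampled_cycles{:});      % concatenate into one signal again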

Approximation of cosh and sinh functions that give large values in MATLAB

My calculation involves cosh(x) and sinh(x) with x around 700-1000, which exceeds MATLAB's double-precision limit, so the result is NaN. The problem in the code is that the argument 2*k_B*T./elastic_restor_coeff grows large when the radius is small (below 5e-9 in the code). My goal is to then integrate over a radius distribution from 1e-9 to 100e-9, which is still a work in progress because I am stuck at this problem.
My current workaround is to approximate the real part of chi_para with a step function once threshold2 reaches a value of about 300. The number 300 is obtained by using the lowest possible value of the radius and reading the cut-off value from the plot. I don't think this approach is good enough for the actual calculation, since this value changes with the radius, so I am looking for a better approximation method. Also, the imaginary part of chi_para is difficult to approximate, since it looks like a pulse instead of a step.
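For reference, one standard way to keep such ratios finite is to evaluate them in log space, using log(cosh(x)) = |x| + log1p(exp(-2|x|)) - log(2), so that only differences of logs are ever exponentiated. A rough sketch of the idea (not part of my original code):

% log(cosh(x)) and log(sinh(x)) evaluated without overflow for large |x|
logcosh = @(x) abs(x) + log1p(exp(-2*abs(x))) - log(2);
logsinh = @(x) abs(x) + log1p(-exp(-2*abs(x))) - log(2);   % valid for x > 0

% ratio cosh(a)/cosh(b) with a, b around 700-1000 stays finite:
a = 800; b = 900;
ratio = exp(logcosh(a) - logcosh(b));   % cosh(a)/cosh(b) directly would be Inf/Inf = NaN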
Here is my code without an integration over a radius distribution.
k_B = 1.38e-23;
T = 296;
radius = [5e-9, 10e-9, 20e-9, 30e-9, 100e-9];
fric_coeff = 8*pi*1e-3.*radius.^3;
elastic_restor_coeff = 8*pi*1.*radius.^3;
time_const = fric_coeff./elastic_restor_coeff;
omega_ar = logspace(-6,6,60);
chi_para = zeros(1,length(omega_ar));
chi_perpen = zeros(1,length(omega_ar));
threshold = zeros(1,length(omega_ar));
threshold2 = zeros(1,length(omega_ar));

for i = 1:length(radius)
    for k = 1:length(omega_ar)
        omega = omega_ar(k);
        fric_coeff = 8*pi*1e-3.*radius(i).^3;
        elastic_restor_coeff = 8*pi*1.*radius(i).^3;
        time_const = fric_coeff/elastic_restor_coeff;
        G_para_func = @(t) ((cosh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))-1).*exp(1i.*omega.*t))./(cosh(2*k_B*T./elastic_restor_coeff)-1);
        G_perpen_func = @(t) ((sinh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))).*exp(1i.*omega.*t))./(sinh(2*k_B*T./elastic_restor_coeff));
        chi_para(k) = (1 + 1i*omega*integral(G_para_func, 0, inf));
        chi_perpen(k) = (1 + 1i*omega*integral(G_perpen_func, 0, inf));
        threshold(k) = 2*k_B*T./elastic_restor_coeff*omega;
        threshold2(k) = 2*k_B*T./elastic_restor_coeff*(omega*time_const - 1);
    end
    figure(1);
    semilogx(omega_ar, real(chi_para), omega_ar, imag(chi_para));
    hold on;
    figure(2);
    semilogx(omega_ar, real(chi_perpen), omega_ar, imag(chi_perpen));
    hold on;
end
Here is the simplified function that I would like to approximate:
where x is iterated in a loop and the maximum value of x is about 700.

Iterative quantile estimation in Matlab

I'm trying to implement an iterative algorithm to estimate quantiles of data generated by a Monte-Carlo simulation. I want to make it iterative because I have many iterations and variables, so storing all data points and using MATLAB's quantile function would take up much of the memory that I actually need for the simulation.
I found some approaches based on the Robbins-Monro process, in which the quantile estimate q is updated for every new sample X(t) as q(t+1) = q(t) + c(t) * (p - I(X(t) < q(t))), where p is the target probability and I is the indicator function.
The implementation with the control sequence c(t) = c / t, where c is a constant, is quite straightforward. In the cited paper, they show that c = 2 * sqrt(2 * pi) gives quite good results, at least for the median. But they also propose an adaptive approach based on an estimate of the histogram. Unfortunately, I haven't figured out how to implement this adaptation yet.
I tested the implementation with a constant c on three test samples with 10,000 data points each. The value c = 2 * sqrt(2 * pi) did not work well for me, but c = 100 looks quite good for the test samples. However, this selection is not very robust and failed in the actual Monte-Carlo simulation, giving results wide of the mark.
probabilities = [0.1, 0.4, 0.7];
controlFactor = 100;
quantile = zeros(size(probabilities));
indicator = zeros(size(probabilities));

for index = 1:length(data)
    control = controlFactor / index;
    indices = (data(index) >= quantile);
    indicator(indices) = probabilities(indices);
    indices = (data(index) < quantile);
    indicator(indices) = probabilities(indices) - 1;
    quantile = quantile + control * indicator;
end
Is there a more robust solution for iterative quantile estimation or does anyone have an implementation for an adaptive approach with small memory consumption?
After trying some of the adaptive iterative approaches that I found in the literature without great success (I'm not sure if I implemented them correctly), I came up with a solution that gives me good results for my test samples and also for the actual Monte-Carlo simulation.
I buffer a subset of simulation results, compute the sample quantiles of each subset, and average over all subset sample quantiles at the end. This seems to work quite well without tuning many parameters. The only parameter is the buffer size, which is 100 in my case.
The results converge quite fast, and increasing the sample size does not improve the results dramatically. There seems to be a small but constant bias, presumably the averaged error of the subset sample quantiles, and that is the downside of my solution. By choosing the buffer size, one fixes the achievable accuracy; increasing the buffer size reduces this bias. In the end, it is a tradeoff between memory and accuracy.
% Generate data
rng('default');
data = sqrt(0.5) * randn(10000, 1) + 5 * rand(10000, 1) + 10;

% Set parameters
probabilities = 0.2;

% Compute reference sample quantiles
quantileEstimation1 = quantile(data, probabilities);

% Estimate quantiles by computing the mean over a number of subset
% sample quantiles
subsetSize = 100;
quantileSum = 0;
for index = 1:length(data) / subsetSize
    quantileSum = quantileSum + quantile(data(((index - 1) * subsetSize + 1):(index * subsetSize)), probabilities);
end
quantileEstimation2 = quantileSum / (length(data) / subsetSize);

% Estimate quantiles with iterative computation
quantileEstimation3 = zeros(size(probabilities));
indicator = zeros(size(probabilities));
controlFactor = 2 * sqrt(2 * pi);
for index = 1:length(data)
    control = controlFactor / index;
    indices = (data(index) >= quantileEstimation3);
    indicator(indices) = probabilities(indices);
    indices = (data(index) < quantileEstimation3);
    indicator(indices) = probabilities(indices) - 1;
    quantileEstimation3 = quantileEstimation3 + control * indicator;
end

fprintf('Reference result: %f\nSubset result: %f\nIterative result: %f\n\n', quantileEstimation1, quantileEstimation2, quantileEstimation3);
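Since the whole point is to avoid keeping all simulation results in memory, the subset-quantile idea can also be run in a streaming fashion: only a buffer of subsetSize values is ever held, and it is flushed into the running sum whenever it fills up. A sketch of that variant (generateSample and numIterations are placeholders for whatever the Monte-Carlo loop actually produces):

% Streaming variant: keep only a small buffer in memory
subsetSize  = 100;
buffer      = zeros(subsetSize, 1);
bufferCount = 0;
quantileSum = 0;
subsetCount = 0;
for iteration = 1:numIterations
    sample = generateSample();                     % placeholder: one simulation result
    bufferCount = bufferCount + 1;
    buffer(bufferCount) = sample;
    if bufferCount == subsetSize
        quantileSum = quantileSum + quantile(buffer, probabilities);
        subsetCount = subsetCount + 1;
        bufferCount = 0;                           % reuse the buffer for the next subset
    end
end
quantileEstimate = quantileSum / subsetCount;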

saving output of for loop when dimensions of output vary as a function of iterator in matlab

I am attempting to fit some UV/Vis absorbance spectra to published reference standards. In general, the absorbance one obtains from the spectrometer is equal to a linear combination of the concentration of each absorber multiplied by the cross-section of absorption for each molecule at each wavelength (and multiplied by the pathlength of the spectrometer).
That said, not all spectrometers are precise in the x axis (wavelength), so some adjustment may be necessary to fit one's experimental data to the reference standards.
In this script, I am adjusting the index of my wavelength and spectral intensity to see if integer steps of my spectra result in a better fit to the reference standards (each step is 0.08 nm). Of course, I need to save the output of the fit parameters; however, since each fit has a different set of dimensions, I'm having difficulty just throwing them into a structure(k) (commented out in the following code snippet).
If anyone has a tip or hint, I'd be very appreciative. The relevant portion of my sample code follows:
for i = -15:15
    lengthy = length(wavelengthy);
    if i >= 0
        xvalue = (1:lengthy - abs(i));
        yvalue = (1 + abs(i):lengthy);
    else
        xvalue = (1 + abs(i):lengthy);
        yvalue = (1:lengthy - abs(i));
    end
    Phi = @(k,wavelengthy) (O3standard(yvalue) .* k(1) + Cl2standard(yvalue) .* k(2) + ClOstandard(yvalue) .* k(3) + OClOstandard(yvalue) .* k(4));
    [khat, resnorm, residual, exitflag, output, lambda, jacobian] = lsqcurvefit(Phi, k0, wavelengthy(xvalue), workingspectra(yvalue), lowerbound, upperbound, options);
    %parameters.khat(k,:) = khat;
    %parameters.jacobian(k,:) = jacobian;
    %parameters.exitflag(k,:) = exitflag;
    %parameters.output(k,:) = output;
    %parameters.residuals(k,:) = residual;
    %concentrations(:,k) = khat./pathlength;
    k = k + 1;
end
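One common pattern for this, sketched here for reference, is to index a non-scalar struct array (or a cell array) by the loop counter, since each element can then hold fields of any size. A minimal sketch reusing the names from the snippet above; the counter k is assumed to be initialized to 1 before the loop, and khat always has four entries here, so a plain matrix works for the concentrations:

k = 1;
results = struct('khat', {}, 'jacobian', {}, 'exitflag', {}, 'output', {}, 'residual', {});
concentrations = zeros(4, 31);                      % 31 shifts from -15:15, 4 fit parameters
for i = -15:15
    % ... set up xvalue/yvalue and call lsqcurvefit exactly as in the snippet ...
    results(k).khat     = khat;                     % each field may have a different size per iteration
    results(k).jacobian = jacobian;
    results(k).exitflag = exitflag;
    results(k).output   = output;
    results(k).residual = residual;
    concentrations(:, k) = khat(:) ./ pathlength;
    k = k + 1;
end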

MATLAB code for Harmonic Product Spectrum

Can someone tell me how I can implement the Harmonic Product Spectrum in MATLAB to find the fundamental frequency of a note in the presence of harmonics? I know I'm supposed to downsample my signal a number of times (after performing the FFT, of course) and then multiply the downsampled copies with the original spectrum.
Say my FFT'd signal is "FFT1"; then the code would roughly be like:
hps1 = downsample(FFT1,2);
hps2 = downsample(FFT1,3);
hps = FFT1.*hps1.*hps2;
Is this code correct? I want to know if I've downsampled properly, and since each variable has a different length, multiplying them results in a matrix dimension error. I really need some quick help, as it's for a project. Thanks in advance.
OK, you can't do "hps = FFT1.*hps1.*hps2;" because each downsampled vector has a different size.
I made an example for you of how to compute a very simple Harmonic Product Spectrum (HPS) using 5 harmonics via decimation (downsampling). I have only tested it on sinusoidal signals, and I get very close to the fundamental frequency in my tests.
This code only shows how to compute the main steps of the algorithm; it is very likely that you will need to improve it!
Source:
%[x,fs] = wavread('ederwander_IN_250Hz.wav');
CorrectFactor = 0.986;
threshold = 0.2;

%F0 start test
f = 250;
fs = 44100;
signal = 0.9*sin(2*pi*f/fs*(0:9999));
x = signal';

framed = x(1:4096);
windowed = framed .* hann(length(framed));
FFT = fft(windowed, 4096);
FFT = FFT(1 : size(FFT,1) / 2);
FFT = abs(FFT);

hps1 = downsample(FFT,1);
hps2 = downsample(FFT,2);
hps3 = downsample(FFT,3);
hps4 = downsample(FFT,4);
hps5 = downsample(FFT,5);

y = [];
for i = 1:length(hps5)
    Product = hps1(i) * hps2(i) * hps3(i) * hps4(i) * hps5(i);
    y(i) = Product;
end

[m,n] = findpeaks(y, 'SORTSTR', 'descend');
Maximum = n(1);

%try fix octave error
if (y(n(1)) * 0.5) > (y(n(2))) %& ( ( m(2) / m(1) ) > threshold )
    Maximum = n(length(n));
end

F0 = ((Maximum / 4096) * fs) * CorrectFactor

plot(y)
HPS often makes an octave error, reporting the pitch one octave up; I changed the code a bit to account for that, see above :-)
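As a side note, the per-bin product loop above can also be written without an explicit loop by truncating every downsampled spectrum to the length of the shortest one, which is also the direct fix for the "matrix dimension" error in the question (just a sketch):

N = length(hps5);                                   % the 5x-downsampled spectrum is the shortest
y = hps1(1:N) .* hps2(1:N) .* hps3(1:N) .* hps4(1:N) .* hps5(1:N);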